@functools.cache destroys the function signature #11280
This should now be expressible using …
I think the conclusion we reached in previous attempts was that the best bet for getting stuff like this working is a potential future …
I'm confused. Is there something about …? I noticed:

```python
class Demo1:
    def foo(self) -> None: ...

    @functools.wraps(foo)
    def bar(self): ...

Demo1().bar()  # Argument missing for parameter "self"
```

But:

```python
class Demo2:
    @contextlib.contextmanager
    def foo(self) -> Iterator[int]: ...

with Demo2().foo() as output:
    reveal_type(output)  # int
```

What gives? Why are some decorators correct and some incorrect/impossible? Also, allow me to repeat/rephrase what I said earlier: if a perfect solution isn't possible, I'd strongly prefer a solution that gives me useful IntelliSense.
typeshed/stdlib/contextlib.pyi, line 76 and line 201 in c51de8e
Unfortunately, this does indeed make it quite a bit harder to type these particular decorators than …

I agree the current situation is deeply unsatisfactory (I think we all do); it's just a question of whether we can find a solution that doesn't cause an unacceptable number of false positives when type checkers are run on user code. FWIW I think #7771 came close, and I'd be interested in looking at that approach again (though I don't have time right now).
Wait, …
@Aran-Fey The … almost … concerns calling … But things are yet again different when you use a decorator and return a … All of this special casing is also part of what makes it so difficult to write a descriptor that transparently works the same way.
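The binding special case mentioned here can be seen at runtime: plain functions are descriptors (they implement `__get__`), so attribute access on an instance binds `self`, while an arbitrary callable object without `__get__` is returned unchanged. A minimal sketch (all names here are illustrative, not from the thread):

```python
def plain_function(self, x):
    return x * 2

class CallableNoGet:
    # Callable, but not a descriptor: no __get__, so no automatic binding.
    def __call__(self, instance, x):
        return x * 2

class Demo:
    bound = plain_function     # function: becomes a bound method on access
    unbound = CallableNoGet()  # callable instance: returned as-is on access

d = Demo()
print(d.bound(21))       # 42 -- `self` was bound automatically
print(d.unbound(d, 21))  # 42 -- the instance must be passed by hand
```

This is the machinery a caching wrapper has to reimplement if it wants to work transparently on functions, methods, and classmethods alike.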
I see, thanks! But boy, what a mess.
You can fix this by running:

```python
import functools

def my_function() -> None:
    pass

my_function = functools.wraps(my_function)(functools.cache(my_function))
```

Instead of:

```python
import functools

@functools.cache
def my_function() -> None:
    pass
```
While this fixes the incorrect signature, you lose the extra attributes that `cache` adds, such as `cache_info` and `cache_clear`.
I assumed this would fall under "no type reassignments", but it looks like there's special casing to make this equivalent to decorating the function. However, it also looks like your fix doesn't actually preserve the function signature, at least not in mypy or pyright.

In fact there's no way to have both the extra methods and the correct signature at the same time with the current type stubs, since … In pyright/pylance you actually get the worst of both worlds with your fix, because you both erase the function signature and …

Perhaps the reason you were fooled into thinking this was a fix for the issue is that Jedi, or any LSP built on top of it, would show the correct function signature this way. But type checkers still infer the wrong signature.

Also, just so we're clear: when I say "there's no way to make this work", I'm talking purely from a type-checking perspective. At runtime this will work, but runtime isn't the issue here.

You can also work around this with your own custom descriptor that doesn't need to handle functions, methods and classmethods all at the same time; if you split these into separate descriptors, you can make it work right now. But we don't have this luxury with the standard library.
I strongly agree with @Aran-Fey that until this is solved, it would be much better to preserve the function signature at the cost of not checking that parameters are hashable than what is currently there. If you decorate a function that has non-hashable parameters, you'll have a runtime error the first time you run your code.
Unfortunately that is not the only cost. The additional cost, and the main reason this is annotated the way it currently is, is that we would see false positives anywhere people try to call the cache methods (`cache_info`, `cache_clear`, `cache_parameters`) on the decorated function. It would be easy to preserve the function signature if we didn't need to make sure those methods are available.
@Daverball My issue was that …
I'm aware there's a (slightly stale) discussion about changing …

```python
from __future__ import annotations

from collections.abc import Hashable
from functools import cache as _cache
from typing import (
    TYPE_CHECKING,
    Any,
    Protocol,
    Self,
    assert_type,
    overload,
    override,
)


class ProtoMethod[T, **Pc, Rc](Protocol):
    def __call__(__self, self: T, *args: Pc.args, **kwds: Pc.kwargs) -> Rc:
        ...


class MethodDescriptor[T, **P, R](Protocol):
    def __call__(__self, self: T, *args: P.args, **kwds: P.kwargs) -> R:
        ...

    @overload
    def __get__(self, instance: None, owner: type[T]) -> Self:
        ...

    @overload
    def __get__(self, instance: T, owner: type[T]) -> FunctionDescriptor[P, R]:
        ...

    def __get__(self, instance: T | None, owner: type[T]) -> Any:
        ...

    def cache_info(self) -> Any: ...
    def cache_clear(self) -> None: ...
    def cache_parameters(self) -> Any: ...


class ProtoClassmethod[T, **Pc, Rc](Protocol):
    def __call__(__self, cls: type[T], *args: Pc.args, **kwds: Pc.kwargs) -> Rc:
        ...


class ClassmethodDescriptor[T, **P, R](Protocol):
    def __call__(self, cls: type[T], *args: P.args, **kwds: P.kwargs) -> R:
        ...

    @overload
    def __get__(self, instance: None, owner: type[T]) -> FunctionDescriptor[P, R]:
        ...

    @overload
    def __get__(self, instance: T, owner: type[T]) -> FunctionDescriptor[P, R]:
        ...

    def __get__(self, instance: T | None, owner: type[T]) -> FunctionDescriptor[P, R]:
        ...

    def cache_info(self) -> Any: ...
    def cache_clear(self) -> None: ...
    def cache_parameters(self) -> Any: ...


class ProtoFunction[**Pc, Rc](Protocol):
    def __call__(__self, *args: Pc.args, **kwds: Pc.kwargs) -> Rc:
        ...


class FunctionDescriptor[**Pc, Rc](Protocol):
    def __call__(__self, *args: Pc.args, **kwds: Pc.kwargs) -> Rc:
        ...

    def cache_info(self) -> Any: ...
    def cache_clear(self) -> None: ...
    def cache_parameters(self) -> Any: ...


class FnHashable[R](Protocol):
    def __call__(self, *args: Hashable, **kwds: Hashable) -> R:
        ...

    def cache_info(self) -> Any: ...
    def cache_clear(self) -> None: ...
    def cache_parameters(self) -> Any: ...


class ClsHashable[T, **P, R](Protocol):
    def __call__(__self, cls: type[T], *args: Hashable, **kwds: Hashable) -> R:
        ...

    def cache_info(self) -> Any: ...
    def cache_clear(self) -> None: ...
    def cache_parameters(self) -> Any: ...

    @overload
    def __get__(self, instance: None, owner: type[T]) -> FunctionDescriptor[P, R]:
        ...

    @overload
    def __get__(self, instance: T, owner: type[T]) -> FunctionDescriptor[P, R]:
        ...

    def __get__(self, instance: T | None, owner: type[T]) -> FunctionDescriptor[P, R]:
        ...


@overload
def cache[T, **P, R](fn: ProtoMethod[T, P, R]) -> MethodDescriptor[T, P, R] | FnHashable[R]:
    ...

@overload
def cache[T, **P, R](fn: ProtoClassmethod[T, P, R]) -> ClassmethodDescriptor[T, P, R] | ClsHashable[T, P, R]:
    ...

@overload
def cache[**P, R](fn: ProtoFunction[P, R]) -> FunctionDescriptor[P, R] | FnHashable[R]:
    ...

def cache(fn: Any) -> Any:
    return _cache(fn)


class CFnCls:
    @cache
    def fn(self, arg: int) -> int:
        print("method fn called")
        return arg

    @classmethod
    @cache
    def cls_fn(cls, arg: int) -> int:
        print("class fn called")
        return arg

    @staticmethod
    @cache
    def st_fn(arg: int) -> int:
        print("static fn called")
        return arg


cfn_inst = CFnCls()
cfn_inst.fn(1)
CFnCls.st_fn(1)
CFnCls.cls_fn(1)
cfn_inst.fn(1)
CFnCls.st_fn(1)
CFnCls.cls_fn(1)

assert_type(cfn_inst.fn(1), int)
assert_type(CFnCls.st_fn(1), int)
assert_type(CFnCls.cls_fn(1), int)

CFnCls().fn.cache_clear()
CFnCls.fn.cache_clear()
CFnCls.st_fn.cache_clear()
CFnCls.cls_fn.cache_clear()

if TYPE_CHECKING:
    CFnCls().fn(1, 1)    # type error - correct
    CFnCls.st_fn(1, 1)   # type error - correct
    CFnCls.cls_fn(1, 1)  # type error - correct


@cache
def fn(arg: int) -> int:
    return arg

fn(1)
assert_type(fn(1), int)
if TYPE_CHECKING:
    fn(1, 2)  # type error - correct
fn.cache_clear()


@overload
@cache
def fn_overload(arg: int) -> int:
    ...

@overload
@cache
def fn_overload(arg: str) -> str:
    ...

@cache
def fn_overload(arg: int | str) -> int | str:
    return arg


fn_overload(1)
fn_overload("1")
# behaves same as builtin cache, merges overloads
assert_type(fn_overload(1), int | str)
assert_type(fn_overload("1"), int | str)
fn_overload.cache_clear()
if TYPE_CHECKING:
    fn_overload({1, 2})  # type error - correct


class Unhashable:
    @override
    def __eq__(self, value: object) -> bool:
        return False


@cache
def no_cache(arg: Unhashable, arg2: int) -> None:
    pass

@_cache
def bi_no_cache(arg: Unhashable, arg2: int) -> None:
    pass

if TYPE_CHECKING:
    # type error in pyright - correct
    # no type error in mypy?
    no_cache(Unhashable(), 2)
    bi_no_cache(Unhashable(), 2)
```

When mypy does correctly error, it gives a slightly strange error, e.g.: …

But otherwise it seems to work, if this is of interest to others. It still doesn't preserve the docstring in IntelliSense, but it avoids any obvious type errors at least. My IntelliSense autocomplete suggests … If someone can find an example where this doesn't work, let me know. Or I can rewrite it using the older …
A function decorated with `@functools.cache` loses its signature. There is an error if you try to pass an argument that isn't hashable, but this completely ruins IntelliSense.
I understand that the type system isn't powerful enough to express the correct semantics, which would be this:
That said, I would much rather have functioning Intellisense than a warning about unhashable inputs.
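For reference, the runtime behavior described above can be demonstrated directly; note that `functools.cache` sets `__wrapped__`, so introspection still recovers the original signature even though static type checkers lose it:

```python
import functools
import inspect

@functools.cache
def add(a: int, b: int) -> int:
    return a + b

# inspect.signature follows __wrapped__ to the original function:
print(inspect.signature(add))  # (a: int, b: int) -> int

print(add(1, 2))  # 3

# The hashability requirement only surfaces at call time:
try:
    add([1], 2)  # a list is unhashable
except TypeError as exc:
    print("TypeError:", exc)
```

This is exactly the gap at issue: the information is present at runtime, but the stubs can't currently express "same signature, hashable arguments, plus the cache methods" all at once.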