
Cannot complete correctly for pytorch #1316

Closed
sillybun opened this issue Apr 25, 2019 · 13 comments

Comments

@sillybun

source = "import torch\ntorch.ma"
script = jedi.Script(source, 2, len("torch.ma"), "e.py")
script.completions()

And I get:

[<Completion: manager_path>, <Completion: manual_seed>, <Completion: math>]

But what I want is matmul. Why can't jedi complete this function?

Besides, jedi cannot complete torch.from_numpy either.

@sillybun
Author

I found that adding 'torch' to jedi.settings.auto_import_modules solves this problem. Is there any better solution?
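
For reference, a minimal sketch of that workaround, using jedi's settings.auto_import_modules list together with the snippet from above:

import jedi
from jedi import settings

# Fall back to actually importing torch at runtime instead of analyzing
# it statically; useful for C-extension-heavy packages like pytorch.
settings.auto_import_modules.append('torch')

source = "import torch\ntorch.ma"
script = jedi.Script(source, 2, len("torch.ma"), "e.py")
print([c.name for c in script.completions()])  # 'matmul' should now be listed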

@davidhalter
Owner

I feel like Jedi has a really hard time finding matmul. Reading the pytorch code, I have trouble finding it myself. Can you enlighten me as to where it is defined?

@sillybun
Author

sillybun commented May 4, 2019

matmul appears in the __init__.pyi file:

__init__.pyi:    def matmul(self, other: 'Tensor', *, out: Optional['Tensor']=None) -> 'Tensor': ...
__init__.pyi:def matmul(self: Tensor, other: Tensor, *, out: Optional[Tensor]=None) -> Tensor: ...
onnx/symbolic.py:def matmul(g, self, other):

Since matmul is written in C++, it's hard to find its definition. But maybe jedi could use the stub information to provide completions.

Also, the docstring of matmul appears in the _torch_docs.py file, from line 2833 to line 2895, as follows:

add_docstr(torch.matmul,
           r"""
matmul(tensor1, tensor2, out=None) -> Tensor

Matrix product of two tensors.

The behavior depends on the dimensionality of the tensors as follows:

- If both tensors are 1-dimensional, the dot product (scalar) is returned.
- If both arguments are 2-dimensional, the matrix-matrix product is returned.
- If the first argument is 1-dimensional and the second argument is 2-dimensional,
  a 1 is prepended to its dimension for the purpose of the matrix multiply.
  After the matrix multiply, the prepended dimension is removed.
- If the first argument is 2-dimensional and the second argument is 1-dimensional,
  the matrix-vector product is returned.
- If both arguments are at least 1-dimensional and at least one argument is
  N-dimensional (where N > 2), then a batched matrix multiply is returned.  If the first
  argument is 1-dimensional, a 1 is prepended to its dimension for the purpose of the
  batched matrix multiply and removed after.  If the second argument is 1-dimensional, a
  1 is appended to its dimension for the purpose of the batched matrix multiply and removed after.
  The non-matrix (i.e. batch) dimensions are :ref:`broadcasted <broadcasting-semantics>` (and thus
  must be broadcastable).  For example, if :attr:`tensor1` is a
  :math:`(j \times 1 \times n \times m)` tensor and :attr:`tensor2` is a :math:`(k \times m \times p)`
  tensor, :attr:`out` will be a :math:`(j \times k \times n \times p)` tensor.

.. note::

    The 1-dimensional dot product version of this function does not support an :attr:`out` parameter.

Arguments:
    tensor1 (Tensor): the first tensor to be multiplied
    tensor2 (Tensor): the second tensor to be multiplied
    out (Tensor, optional): the output tensor

Example::

    >>> # vector x vector
    >>> tensor1 = torch.randn(3)
    >>> tensor2 = torch.randn(3)
    >>> torch.matmul(tensor1, tensor2).size()
    torch.Size([])
    >>> # matrix x vector
    >>> tensor1 = torch.randn(3, 4)
    >>> tensor2 = torch.randn(4)
    >>> torch.matmul(tensor1, tensor2).size()
    torch.Size([3])
    >>> # batched matrix x broadcasted vector
    >>> tensor1 = torch.randn(10, 3, 4)
    >>> tensor2 = torch.randn(4)
    >>> torch.matmul(tensor1, tensor2).size()
    torch.Size([10, 3])
    >>> # batched matrix x batched matrix
    >>> tensor1 = torch.randn(10, 3, 4)
    >>> tensor2 = torch.randn(10, 4, 5)
    >>> torch.matmul(tensor1, tensor2).size()
    torch.Size([10, 3, 5])
    >>> # batched matrix x broadcasted matrix
    >>> tensor1 = torch.randn(10, 3, 4)
    >>> tensor2 = torch.randn(4, 5)
    >>> torch.matmul(tensor1, tensor2).size()
    torch.Size([10, 3, 5])

""")

@davidhalter
Owner

Oh, now I see. Please wait for #839 and tell me if it's not working once that is finished.

Note to myself: The stubs are here: https://github.com/pytorch/pytorch/blob/master/torch/__init__.pyi.in

@Coderx7

Coderx7 commented May 17, 2019

@davidhalter Hi, I faced the same issue with Pytorch and created a new issue on the vscode side.
I don't know if we should continue this here or there, since on the one hand this is related to Jedi, and on the other hand it is happening in vscode (in the Python extension, which uses Jedi).
Anyway, here is the ticket on the vscode side: #5463

@davidhalter
Owner

davidhalter commented May 19, 2019

@Coderx7 Happy to have the conversation here. However, it's now just a matter of time until we have a release. On the master branch you should already be able to work with pytorch stubs.

Feel free to try :) You just need to clone Jedi, run git submodule update --init, and then pip install -e . from the checkout. There are probably still a few issues; I know of about 3 for now, but I'm fixing them in the next few days, and after that the master branch should be better than it was before.

I'm actively looking for feedback for this branch, so you're the perfect "customer" :).

PS: I'm not sure whether all of that works with vscode, but you could just do what I said and copy the whole Jedi folder to the place where vscode's Jedi resides.
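
A quick sanity check that the master checkout is the Jedi actually being imported, and that stub-based completion works, reusing the snippet from the top of this thread:

import jedi

print(jedi.__version__)  # should report the development version, not 0.13.x

source = "import torch\ntorch.ma"
script = jedi.Script(source, 2, len("torch.ma"), "e.py")
print([c.name for c in script.completions()])  # 'matmul' should appear once stubs are used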

@Coderx7

Coderx7 commented May 20, 2019

@davidhalter, it seems they have it baked into the Python extension, and there is no separate folder for Jedi!
Unfortunately, it seems we can't just upgrade to the newer version manually; this needs to be done by the VSCode Python extension team themselves :(

@davidhalter
Owner

Why? The Jedi files need to be somewhere ;-)

@Coderx7

Coderx7 commented May 21, 2019

I found it. It was located in C:\Users\Your_User_Name\.vscode\extensions\ms-python.python-2019.4.12954\pythonFiles\lib\python\jedi
I have an older version of Jedi (0.9, I guess) in my Anaconda3 distribution (which is the default Python installation on my system), and the version shipped with VSCode is 0.12.0, as it reads in the __init__.py.
So how do I go about it now? Will just replacing the old files with the newer ones from the master branch do it? Because I can't possibly keep using that old version in Anaconda!
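
As an aside, the bundled version can be checked programmatically instead of reading __init__.py; a sketch, with the extension path taken from the comment above (adjust it for your machine):

import sys

# Hypothetical path, copied from the comment above; point it at the
# extension's bundled library directory on your machine.
sys.path.insert(0, r"C:\Users\Your_User_Name\.vscode\extensions"
                   r"\ms-python.python-2019.4.12954\pythonFiles\lib\python")

import jedi
print(jedi.__version__)  # the Jedi version that vscode's extension bundles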

@davidhalter
Owner

0.9 is really old. That's probably one of the first "ok" versions, but please use a newer one; no idea why they ship such old versions. Switching from 0.12.0 to 0.13.2 should probably be worry-free, whereas switching from 0.9 might cause some problems (some things have been deprecated and removed since then).

@Coderx7

Coderx7 commented May 22, 2019

I updated my Anaconda's Jedi to the latest version. I also downloaded the latest master branch and copied the Jedi subfolder into the VSCode extensions folder, replacing the old version there. After doing so, IntelliSense broke and didn't work anymore, so I had to revert my changes (i.e. go back to the former 0.12.0 version in the VSCode directory).
Was this all I needed to do, or did I miss something?

@davidhalter
Owner

For VSCode, I'm really unsure. I would probably need an exception traceback to tell what's going on. There are probably logs somewhere, or a debug function that you can enable.

@Coderx7

Coderx7 commented Jun 7, 2019

Here is the log concerning my latest changes (updating Jedi to the latest version), and this is the console log when the default jedi package is used and working fine.
