Issues when using pytorch 0.4 #31

Closed

Rahim16 opened this issue Jul 3, 2018 · 3 comments

Rahim16 commented Jul 3, 2018

I get errors when trying to run both the DNC and SDNC examples with PyTorch 0.4.0. For DNC:

(py36) [mammadli@server test]$ python test_dnc.py 
/amd/home/mammadli/tools/anaconda2/envs/py36/lib/python3.6/site-packages/h5py/__init__.py:36: FutureWarning: Conversion of the second argument of issubdtype from `float` to `np.floating` is deprecated. In future, it will be treated as `np.float64 == np.dtype(float).type`.
  from ._conv import register_converters as _register_converters
/amd/home/mammadli/tools/anaconda2/envs/py36/lib/python3.6/site-packages/dnc/dnc.py:118: UserWarning: nn.init.orthogonal is now deprecated in favor of nn.init.orthogonal_.
  orthogonal(self.output.weight)
/amd/home/mammadli/tools/anaconda2/envs/py36/lib/python3.6/site-packages/dnc/dnc.py:133: UserWarning: nn.init.xavier_uniform is now deprecated in favor of nn.init.xavier_uniform_.
  xavier_uniform(h)
/amd/home/mammadli/tools/anaconda2/envs/py36/lib/python3.6/site-packages/dnc/util.py:95: UserWarning: Implicit dimension choice for softmax has been deprecated. Change the call to include dim=X as an argument.
  soft_max_2d = F.softmax(input_2d)
Traceback (most recent call last):
  File "test_dnc.py", line 19, in <module>
    rnn(torch.randn(10, 4, 64).cuda(), (controller_hidden, memory, read_vectors), True)
  File "/amd/home/mammadli/tools/anaconda2/envs/py36/lib/python3.6/site-packages/torch/nn/modules/module.py", line 491, in __call__
    result = self.forward(*input, **kwargs)
  File "/amd/home/mammadli/tools/anaconda2/envs/py36/lib/python3.6/site-packages/dnc/dnc.py", line 265, in forward
    inputs = [self.output(i) for i in inputs]
  File "/amd/home/mammadli/tools/anaconda2/envs/py36/lib/python3.6/site-packages/dnc/dnc.py", line 265, in <listcomp>
    inputs = [self.output(i) for i in inputs]
  File "/amd/home/mammadli/tools/anaconda2/envs/py36/lib/python3.6/site-packages/torch/nn/modules/module.py", line 491, in __call__
    result = self.forward(*input, **kwargs)
  File "/amd/home/mammadli/tools/anaconda2/envs/py36/lib/python3.6/site-packages/torch/nn/modules/linear.py", line 55, in forward
    return F.linear(input, self.weight, self.bias)
  File "/amd/home/mammadli/tools/anaconda2/envs/py36/lib/python3.6/site-packages/torch/nn/functional.py", line 992, in linear
    return torch.addmm(bias, input, weight.t())
RuntimeError: Expected object of type torch.FloatTensor but found type torch.cuda.FloatTensor for argument #4 'mat1'
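
A hedged reading of this first traceback: the input tensor was moved to the GPU, but the type mismatch on mat1 suggests the DNC's output Linear layer still holds CPU weights. A generic PyTorch workaround (not this library's documented API) is to move the whole module to the GPU before calling it:

# sketch: move all registered parameters and buffers to the default GPU
# so they live on the same device as the .cuda() input below
rnn = rnn.cuda()
rnn(torch.randn(10, 4, 64).cuda(), (controller_hidden, memory, read_vectors), True)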

For SDNC:

(py36) [mammadli@server test]$ python test_dnc.py 
/amd/home/mammadli/tools/anaconda2/envs/py36/lib/python3.6/site-packages/h5py/__init__.py:36: FutureWarning: Conversion of the second argument of issubdtype from `float` to `np.floating` is deprecated. In future, it will be treated as `np.float64 == np.dtype(float).type`.
  from ._conv import register_converters as _register_converters
/amd/home/mammadli/tools/anaconda2/envs/py36/lib/python3.6/site-packages/dnc/dnc.py:118: UserWarning: nn.init.orthogonal is now deprecated in favor of nn.init.orthogonal_.
  orthogonal(self.output.weight)
/amd/home/mammadli/tools/anaconda2/envs/py36/lib/python3.6/site-packages/dnc/sparse_temporal_memory.py:65: UserWarning: nn.init.orthogonal is now deprecated in favor of nn.init.orthogonal_.
  T.nn.init.orthogonal(self.interface_weights.weight)
/amd/home/mammadli/tools/anaconda2/envs/py36/lib/python3.6/site-packages/dnc/dnc.py:133: UserWarning: nn.init.xavier_uniform is now deprecated in favor of nn.init.xavier_uniform_.
  xavier_uniform(h)
Traceback (most recent call last):
  File "test_dnc.py", line 20, in <module>
    rnn(torch.randn(10, 4, 64).cuda(), (controller_hidden, memory, read_vectors), True)
  File "/amd/home/mammadli/tools/anaconda2/envs/py36/lib/python3.6/site-packages/torch/nn/modules/module.py", line 491, in __call__
    result = self.forward(*input, **kwargs)
  File "/amd/home/mammadli/tools/anaconda2/envs/py36/lib/python3.6/site-packages/dnc/dnc.py", line 219, in forward
    controller_hidden, mem_hidden, last_read = self._init_hidden(hx, batch_size, reset_experience)
  File "/amd/home/mammadli/tools/anaconda2/envs/py36/lib/python3.6/site-packages/dnc/dnc.py", line 144, in _init_hidden
    mhx = self.memories[0].reset(batch_size, erase=reset_experience)
  File "/amd/home/mammadli/tools/anaconda2/envs/py36/lib/python3.6/site-packages/dnc/sparse_temporal_memory.py", line 126, in reset
    'read_positions': cuda(T.arange(0, c).expand(b, c), gpu_id=self.gpu_id).long()
  File "/amd/home/mammadli/tools/anaconda2/envs/py36/lib/python3.6/site-packages/dnc/util.py", line 30, in cuda
    return var(x.pin_memory(), requires_grad=grad).cuda(gpu_id, async=True)
RuntimeError: invalid argument 3: Source tensor must be contiguous at /opt/conda/conda-bld/pytorch_1524590031827/work/aten/src/THC/generic/THCTensorCopy.c:114
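
A hedged reading of the SDNC traceback: T.arange(0, c).expand(b, c) returns a non-contiguous view, and pin_memory() in 0.4 requires contiguous storage, hence the "Source tensor must be contiguous" error. Below is a sketch of what a patched cuda() helper in dnc/util.py could look like, inferred only from the lines shown above, not a confirmed upstream fix (0.4 also renamed the async= keyword to non_blocking=):

def cuda(x, grad=False, gpu_id=-1):
    # gpu_id == -1 meaning "stay on CPU" is an assumption inferred from the
    # library's gpu_id convention, not a confirmed detail of its internals
    if gpu_id == -1:
        return var(x, requires_grad=grad)
    # .contiguous() materializes the expand()'d view so pin_memory() accepts it
    return var(x.contiguous().pin_memory(), requires_grad=grad).cuda(gpu_id, non_blocking=True)
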
@kierkegaard13

Try running in CPU mode for now; I think there are issues with running this library on a GPU. Otherwise, you'll need to go into the code and move whatever tensors it's complaining about to GPU memory with .cuda() or cuda(..., gpu_id=0).
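
A minimal sketch of that CPU-mode workaround, assuming the constructor arguments from the project README (adjust them to your own setup):

from dnc import DNC
import torch

# gpu_id=-1 keeps the model and its memory on the CPU
rnn = DNC(
    input_size=64,
    hidden_size=128,
    rnn_type='lstm',
    num_layers=4,
    nr_cells=100,
    cell_size=32,
    read_heads=4,
    batch_first=True,
    gpu_id=-1)

# passing (None, None, None) lets the DNC initialize its own hidden state
(controller_hidden, memory, read_vectors) = (None, None, None)
output, (controller_hidden, memory, read_vectors) = \
    rnn(torch.randn(10, 4, 64), (controller_hidden, memory, read_vectors), True)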

ixaxaar commented Jul 5, 2018

Yes, this isn't tested with 0.4 at all and will definitely break. Man, I need to find the time to look into this. :(

ixaxaar commented Apr 5, 2019

And after almost a year, this is done 👯‍♂️

ixaxaar closed this as completed Apr 5, 2019