TypeError: 'tuple' object is not callable #4

Open
cdcseacave opened this issue Jun 20, 2023 · 8 comments
@cdcseacave

I get the following error when running bash run.sh:

Setting up PyTorch plugin "gridsample_grad2"... /home/dan/miniconda3/envs/petneus/lib/python3.8/site-packages/torch/functional.py:504: UserWarning: torch.meshgrid: in an upcoming release, it will be required to pass the indexing argument. (Triggered internally at ../aten/src/ATen/native/TensorShape.cpp:3483.)
  return _VF.meshgrid(tensors, **kwargs)  # type: ignore[attr-defined]
/home/dan/miniconda3/envs/petneus/lib/python3.8/site-packages/torchvision/models/_utils.py:208: UserWarning: The parameter 'pretrained' is deprecated since 0.13 and may be removed in the future, please use 'weights' instead.
  warnings.warn(
/home/dan/miniconda3/envs/petneus/lib/python3.8/site-packages/torchvision/models/_utils.py:223: UserWarning: Arguments other than a weight enum or `None` for 'weights' are deprecated since 0.13 and may be removed in the future. The current behavior is equivalent to passing `weights=VGG16_Weights.IMAGENET1K_V1`. You can also use `weights=VGG16_Weights.DEFAULT` to get the most up-to-date weights.
  warnings.warn(msg)

  0%|          | 0/100000 [00:00<?, ?it/s]
  0%|          | 0/100000 [00:00<?, ?it/s]
Done.
Hello Wooden
Load data: Begin
Load data: End
Setting up [LPIPS] perceptual loss: trunk [vgg], v[0.1], spatial [off]
Loading model from: /home/dan/miniconda3/envs/petneus/lib/python3.8/site-packages/lpips/weights/v0.1/vgg.pth
Traceback (most recent call last):
  File "exp_runner_pet.py", line 497, in <module>
    runner.train()
  File "exp_runner_pet.py", line 167, in train
    render_out = self.renderer.render(rays_o, rays_d, near, far,
  File "/mnt/d/apps/PET-NeuS/models/renderer_pet.py", line 364, in render
    gradient = self.sdf_network.gradient(pts.reshape(-1, 3)).squeeze().detach()
  File "/mnt/d/apps/PET-NeuS/models/triplane_pet.py", line 159, in gradient
    gradients = torch.autograd.grad(
  File "/home/dan/miniconda3/envs/petneus/lib/python3.8/site-packages/torch/autograd/__init__.py", line 303, in grad
    return Variable._execution_engine.run_backward(  # Calls into the C++ engine to run the backward pass
  File "/home/dan/miniconda3/envs/petneus/lib/python3.8/site-packages/torch/autograd/function.py", line 274, in apply
    return user_fn(self, *args)
  File "/mnt/d/apps/PET-NeuS/third_party/ops/grid_sample.py", line 59, in backward
    grad_input, grad_grid = _GridSample2dBackward.apply(grad_output, input, grid, ctx.padding_mode, ctx.align_corners)
  File "/home/dan/miniconda3/envs/petneus/lib/python3.8/site-packages/torch/autograd/function.py", line 506, in apply
    return super().apply(*args, **kwargs)  # type: ignore[misc]
  File "/mnt/d/apps/PET-NeuS/third_party/ops/grid_sample.py", line 68, in forward
    grad_input, grad_grid = op(grad_output, input, grid, 0, padding_mode, align_corners, output_mask)
TypeError: 'tuple' object is not callable
@emiald

emiald commented Jun 21, 2023

I met the same problem and resolved it. Debug grid_sample.py and change it: op = torch._C._jit_get_operation('aten::grid_sampler_2d_backward') now returns a tuple, which is what causes the problem.
Change it to op = torch._C._jit_get_operation('aten::grid_sampler_2d_backward')[0]
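For context, in newer PyTorch releases torch._C._jit_get_operation returns a (callable, overload_names) tuple rather than the bare callable, so the old call site ends up calling a tuple. Here is a minimal sketch of that failure mode and the [0] fix, using a hypothetical stand-in function (not the real torch API) so it runs anywhere:

```python
# Hypothetical stand-in for torch._C._jit_get_operation: newer PyTorch
# returns a (callable, overload_names) tuple instead of the bare callable.
def fake_jit_get_operation(name):
    def op(*args):
        # Pretend to run the named aten op on its arguments.
        return (name, args)
    return op, ("",)  # tuple, so calling the result directly fails

# Old-style call site: treats the return value as the callable itself.
result = fake_jit_get_operation("aten::grid_sampler_2d_backward")
try:
    result("grad_output", "input", "grid")
except TypeError as e:
    print(e)  # 'tuple' object is not callable

# Fix: index [0] to get the actual operation out of the tuple.
op = fake_jit_get_operation("aten::grid_sampler_2d_backward")[0]
print(op("grad_output", "input", "grid"))
```

The same shape change is why the one-character `[0]` fix in grid_sample.py is enough: everything downstream still receives the callable it expects.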

@cdcseacave
Author

Thanks, that helps a bit, but now I get a new error:

Setting up PyTorch plugin "gridsample_grad2"... /home/dan/miniconda3/envs/petneus/lib/python3.8/site-packages/torch/functional.py:504: UserWarning: torch.meshgrid: in an upcoming release, it will be required to pass the indexing argument. (Triggered internally at ../aten/src/ATen/native/TensorShape.cpp:3483.)
  return _VF.meshgrid(tensors, **kwargs)  # type: ignore[attr-defined]
/home/dan/miniconda3/envs/petneus/lib/python3.8/site-packages/torchvision/models/_utils.py:208: UserWarning: The parameter 'pretrained' is deprecated since 0.13 and may be removed in the future, please use 'weights' instead.
  warnings.warn(
/home/dan/miniconda3/envs/petneus/lib/python3.8/site-packages/torchvision/models/_utils.py:223: UserWarning: Arguments other than a weight enum or `None` for 'weights' are deprecated since 0.13 and may be removed in the future. The current behavior is equivalent to passing `weights=VGG16_Weights.IMAGENET1K_V1`. You can also use `weights=VGG16_Weights.DEFAULT` to get the most up-to-date weights.
  warnings.warn(msg)

  0%|          | 0/100000 [00:00<?, ?it/s]
  0%|          | 0/100000 [00:00<?, ?it/s]
Done.
Hello Wooden
Load data: Begin
Load data: End
Setting up [LPIPS] perceptual loss: trunk [vgg], v[0.1], spatial [off]
Loading model from: /home/dan/miniconda3/envs/petneus/lib/python3.8/site-packages/lpips/weights/v0.1/vgg.pth
Traceback (most recent call last):
  File "exp_runner_pet.py", line 497, in <module>
    runner.train()
  File "exp_runner_pet.py", line 167, in train
    render_out = self.renderer.render(rays_o, rays_d, near, far,
  File "/mnt/d/apps/PET-NeuS/models/renderer_pet.py", line 393, in render
    ret_fine = self.render_core(rays_o,
  File "/mnt/d/apps/PET-NeuS/models/renderer_pet.py", line 269, in render_core
    gradients = sdf_network.gradient(pts).squeeze()
  File "/mnt/d/apps/PET-NeuS/models/triplane_pet.py", line 157, in gradient
    y = self.sdf(x)
  File "/mnt/d/apps/PET-NeuS/models/triplane_pet.py", line 153, in sdf
    return self.forward(coordinates)[:, :1]
  File "/mnt/d/apps/PET-NeuS/models/triplane_pet.py", line 97, in forward
    return self.run_model(planes, self.decoder, coordinates.unsqueeze(0), directions, self.rendering_kwargs)
  File "/mnt/d/apps/PET-NeuS/models/triplane_pet.py", line 116, in run_model
    attn_windows = self.attn4(x_windows)
  File "/home/dan/miniconda3/envs/petneus/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1501, in _call_impl
    return forward_call(*args, **kwargs)
  File "/mnt/d/apps/PET-NeuS/models/swin_transformer.py", line 136, in forward
    attn = (q @ k.transpose(-2, -1))
torch.cuda.OutOfMemoryError: CUDA out of memory. Tried to allocate 1.50 GiB (GPU 0; 15.99 GiB total capacity; 13.59 GiB already allocated; 0 bytes free; 14.21 GiB reserved in total by PyTorch) If reserved memory is >> allocated memory try setting max_split_size_mb to avoid fragmentation.  See documentation for Memory Management and PYTORCH_CUDA_ALLOC_CONF
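Before shrinking the batch size, the allocator hint at the end of that message is worth trying. PYTORCH_CUDA_ALLOC_CONF is a documented PyTorch environment variable; the 128 MiB value below is only an illustrative starting point, not a tuned one:

```shell
# Cap the CUDA caching allocator's split size to reduce fragmentation,
# as the OOM message suggests (128 MiB is an illustrative value).
export PYTORCH_CUDA_ALLOC_CONF=max_split_size_mb:128
echo "$PYTORCH_CUDA_ALLOC_CONF"
# ...then rerun training, e.g.: bash run.sh
```

This only helps when reserved memory far exceeds allocated memory (fragmentation); if the model genuinely needs more than the GPU has, reducing the batch size is still required.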

@YunnaChen

I also met this problem. Did you solve it?

@520jz

520jz commented Aug 24, 2023

I also met this problem. Did you solve it?

Maybe you can change the batch_size.

@520jz

520jz commented Aug 24, 2023

Thanks, that helps a bit, but now I get a new error:

torch.cuda.OutOfMemoryError: CUDA out of memory. Tried to allocate 1.50 GiB (GPU 0; 15.99 GiB total capacity; 13.59 GiB already allocated; 0 bytes free; 14.21 GiB reserved in total by PyTorch) If reserved memory is >> allocated memory try setting max_split_size_mb to avoid fragmentation.  See documentation for Memory Management and PYTORCH_CUDA_ALLOC_CONF

Maybe you can change the batch_size.

@Hansen7777777

But when I change batch_size = 128, I still hit this error. Maybe the batch_size change never takes effect?

@520jz

520jz commented Sep 24, 2023

But when I change batch_size = 128, I still hit this error. Maybe the batch_size change never takes effect?

Try batch_size = 64.
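PET-NeuS follows the NeuS convention of HOCON-style .conf files under confs/, where batch-related settings live in the train block. Assuming that layout (check your own config file for the actual key and its default), the change looks like:

```
train {
    # Lowered to fit a 16 GB GPU; the repo default may be higher (e.g. 512).
    batch_size = 64
}
```

Note that lowering the ray batch size trades memory for longer training, and it does not shrink the Swin attention windows themselves, which is where this particular OOM occurs, so very large scenes may still need further reductions.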

@Terry10086

Has anyone successfully run the code? If so, with batch_size = 64 (or more), how much GPU memory does it use?
Looking forward to your reply~
