What happened?

convolve does not work if the kernel is distributed when more than one GPU is available.

Code snippet triggering the error
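The original snippet is not preserved in this capture. Below is a minimal sketch of a reproducer consistent with the call shown in the traceback: the names dis_signal and dis_kernel_odd come from the traceback, while the array sizes, dtype, device placement, and split axes are assumptions.

import heat as ht

# Distributed signal and odd-length kernel, both split across processes and
# placed on GPU (sizes, dtype, and device are assumptions for illustration).
dis_signal = ht.arange(0, 16, split=0, dtype=ht.int32, device="gpu")
dis_kernel_odd = ht.ones(3, split=0, dtype=ht.int32, device="gpu")

# Fails on multi-GPU runs when the kernel is distributed.
conv = ht.convolve(dis_signal, dis_kernel_odd, mode="full")
print(conv)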
Error message or erroneous outcome

$ CUDA_VISIBLE_DEVICES=0,1,2,3 srun --ntasks=2 -l python test.py
1: Traceback (most recent call last):
1:   File ".../test.py", line 7, in <module>
1:     conv = ht.convolve(dis_signal, dis_kernel_odd, mode='full')
1:   File ".../heat-venv_2023/lib/python3.10/site-packages/heat/core/signal.py", line 161, in convolve
1:     local_signal_filtered = fc.conv1d(signal, t_v1)
1: RuntimeError: Expected all tensors to be on the same device, but found at least two devices, cuda:1 and cuda:0! (when checking argument for argument weight in method wrapper__cudnn_convolution)

Version
main (development branch)

Python version
3.10

PyTorch version
1.12

MPI version
OpenMPI 4.1.4