Traceback (most recent call last):
  File "Situation3.py", line 187, in <module>
    print("random.choice(style_dataset)", random.choice(style_dataset))
  File "/home/guest/cwy/miniconda3/lib/python3.8/random.py", line 291, in choice
    return seq[i]
  File "/home/guest/cwy/miniconda3/lib/python3.8/site-packages/torchvision/datasets/folder.py", line 153, in __getitem__
    sample = self.transform(sample)
  File "/home/guest/cwy/miniconda3/lib/python3.8/site-packages/torchvision/transforms/transforms.py", line 67, in __call__
    img = t(img)
  File "/home/guest/cwy/miniconda3/lib/python3.8/site-packages/torch/nn/modules/module.py", line 727, in _call_impl
    result = self.forward(*input, **kwargs)
  File "/home/guest/cwy/miniconda3/lib/python3.8/site-packages/torchvision/transforms/transforms.py", line 823, in forward
    i, j, h, w = self.get_params(img, self.scale, self.ratio)
  File "/home/guest/cwy/miniconda3/lib/python3.8/site-packages/torchvision/transforms/transforms.py", line 787, in get_params
    log_ratio = torch.log(torch.tensor(ratio))
RuntimeError: log_vml_cpu not implemented for 'Long'
When I run `python Situation3.py`, I get this error. Has anyone run into the same problem? How can I solve it? Hoping someone can reply.
Change

`scale=(256/480, 1), ratio=(1, 1)`

to

`scale=(256/480, 1.0), ratio=(1.0, 1.0)`

With the integer tuple, `torch.tensor(ratio)` inside `RandomResizedCrop.get_params` infers a Long (int64) dtype, and `torch.log` has no CPU kernel for Long tensors, hence the error. Float literals make the tensor float-typed.
As in the PyTorch docs:
`class torchvision.transforms.RandomResizedCrop(size, scale=(0.08, 1.0), ratio=(0.75, 1.3333333333333333), interpolation=2)`

Crop the given image to random size and aspect ratio. The image can be a PIL Image or a Tensor, in which case it is expected to have […, H, W] shape, where … means an arbitrary number of leading dimensions.

A crop of random size (default: of 0.08 to 1.0) of the original size and a random aspect ratio (default: of 3/4 to 4/3) of the original aspect ratio is made. This crop is finally resized to the given size. This is popularly used to train the Inception networks.

Parameters:
- size (int or sequence) – expected output size of each edge. If size is an int instead of a sequence like (h, w), a square output size (size, size) is made. If provided a tuple or list of length 1, it will be interpreted as (size[0], size[0]).
- scale (tuple of python:float) – range of size of the origin size cropped
- ratio (tuple of python:float) – range of aspect ratio of the origin aspect ratio cropped.
- interpolation (int) – Desired interpolation enum defined by filters. Default is PIL.Image.BILINEAR. If input is Tensor, only PIL.Image.NEAREST, PIL.Image.BILINEAR and PIL.Image.BICUBIC are supported.
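For reference, a quick check (a minimal sketch using nothing beyond stock torch) showing why the integer tuple trips `get_params`:

```python
import torch

# torch.tensor infers the dtype from the Python values it is given.
print(torch.tensor((1, 1)).dtype)      # torch.int64 ("Long")
print(torch.tensor((1.0, 1.0)).dtype)  # torch.float32

# get_params calls torch.log(torch.tensor(ratio)); with integer values this
# raises "log_vml_cpu not implemented for 'Long'" on builds like the one above.
print(torch.log(torch.tensor((1.0, 1.0))))  # tensor([0., 0.])
```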