DenseNet-121 and Camelyon17 #99
-
Hello! It seems that for the Camelyon17 experiments, you use an input shape of 3 x 96 x 96. However, PyTorch's documentation says the input to DenseNet needs to be at least 3 x 224 x 224 (https://pytorch.org/hub/pytorch_vision_densenet/). Could you point me to where in the code the input is resized to meet this requirement, or clarify why resizing is not necessary? Thank you for all your help!
Replies: 1 comment 1 reply
-
Hello! We don't use the pre-trained DenseNet models for Camelyon17, and the minimum size in the code is actually just 29x29: https://github.com/pytorch/vision/blob/main/torchvision/models/densenet.py#L260
We did try reshaping it to 3 x 224 x 224; IIRC this improved performance slightly but was considerably slower, so we opted to keep the smaller size.
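To see why 96 x 96 (and anything down to 29 x 29) works, note that torchvision's DenseNet ends with adaptive average pooling, so it only requires the feature map to still be at least 1 x 1 after the stem and the three transition layers. A minimal sketch of that spatial-size arithmetic, assuming the standard DenseNet-121 downsampling stages (7x7 stride-2 conv with padding 3, 3x3 stride-2 max pool with padding 1, then three 2x2 stride-2 average pools in the transitions):

```python
def densenet_spatial_size(n: int) -> int:
    """Spatial side length of the final DenseNet feature map
    for an n x n input, following the standard downsampling stages."""
    # Stem: 7x7 conv, stride 2, padding 3
    n = (n + 2 * 3 - 7) // 2 + 1
    # Stem: 3x3 max pool, stride 2, padding 1
    n = (n + 2 * 1 - 3) // 2 + 1
    # Three transition layers, each ending in a 2x2 average pool, stride 2
    for _ in range(3):
        n = n // 2
    return n

print(densenet_spatial_size(224))  # 7  (the canonical ImageNet case)
print(densenet_spatial_size(96))   # 3  (Camelyon17 patches still work)
print(densenet_spatial_size(29))   # 1  (the smallest input that survives)
print(densenet_spatial_size(28))   # 0  (one pixel smaller collapses to nothing)
```

Since the final feature map is adaptively pooled to 1 x 1 regardless of its size, any input of 29 x 29 or larger produces a valid output; the 224 x 224 requirement in the hub docs applies to reproducing the pre-trained ImageNet setup, not to training from scratch.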