`utils.resize` branches internally, calling `cv2.resize` for single-channel data and `skimage.transform.resize` for multichannel data. Comments indicate this was done because `cv2` didn't support multichannel input, but it does now. There are two potential paths for improvement:
1. Refactor `utils.resize` to always use `cv2`, which is 50-100x faster.
2. Refactor `utils.resize` to always use `skimage`, which would remove the `opencv` dependency from the deepcell libraries.
If `utils.resize` is used frequently in pre/post-processing, then option 1 makes the most sense.
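For reference, here's a minimal sketch of what option 1 could look like. The function name and signature are illustrative only (not the actual `utils.resize` API), and it assumes data laid out as `(rows, cols)` or `(rows, cols, channels)`:

```python
import cv2
import numpy as np

def resize(data, shape, interpolation=cv2.INTER_LINEAR):
    """Hypothetical cv2-only resize (sketch of option 1)."""
    # cv2.resize expects dsize as (width, height), i.e. (cols, rows)
    resized = cv2.resize(data, (shape[1], shape[0]), interpolation=interpolation)
    # cv2 squeezes a trailing singleton channel axis; restore it so the
    # output rank matches the input rank
    if data.ndim == 3 and resized.ndim == 2:
        resized = resized[..., np.newaxis]
    return resized
```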
While digging around, I also noticed a discrepancy in the values of boundary pixels between the two methods:
```python
import cv2
import matplotlib.pyplot as plt
import numpy as np
from skimage import transform

rng = np.random.default_rng()
img = rng.random((32, 32))
out_shape = (40, 40)

# These parameters generally match the defaults of `utils.resize`
rs_cv = cv2.resize(img, out_shape, interpolation=cv2.INTER_LINEAR)
rs_sk = transform.resize(
    img, out_shape, mode="constant", preserve_range=True, order=1, anti_aliasing=True
)

plt.imshow(rs_cv - rs_sk)
plt.colorbar()
```
There's no difference (within floating point precision) for the central pixels, but the discrepancy for edge pixels is significant. This may affect any workflows where single-channel and multi-channel images are used together.
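Continuing the snippet above, one way to make this concrete is to compare the maximum difference on the border against the interior (exact magnitudes will vary with the random input):

```python
diff = np.abs(rs_cv - rs_sk)
interior = diff[1:-1, 1:-1]
border = np.concatenate([diff[0], diff[-1], diff[:, 0], diff[:, -1]])
print("max |diff| interior:", interior.max())  # ~ floating point noise
print("max |diff| border:  ", border.max())    # noticeably larger
```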