This repository has been archived by the owner on Jun 22, 2022. It is now read-only.

Commit: Dev solution 5 (#66)

* added parametrized loss weights

* Update pipelines.py

removed random code that had been pasted into pipelines

* Update postprocessing.py

bug-fix in crop/pad

* Update postprocessing.py

* two_unets

* Update neptune.yaml

dropped local data paths

* pull request fixes

* two specialist unets pipeline added

* Update neptune.yaml

* two unets pipeline added

* Update pipeline_config.py

added globals for specialists

* corrections in the neptune.yaml

* fixes for unet_specialists

* Improve scoring (#54)

* proposed a new (hopefully faster) method of computing the score

* Update metrics.py

* corrections

* Update callbacks.py

hot-fixed averager update bug

* Update utils.py

submission generation fix

* Bug fix in pipelines, assertion that checks outputs length added. (#62)

* Bug fix in pipelines, assertion that checks outputs length added.

* assertion message corrected

* corrected order of elements in assertion

* weighted segmentation loss added (#60)

* weighted segmentation loss added

* weighted segmentation loss added

* formatting

* Update neptune.yaml

* Update pipeline_config.py

* Update validation.py

* names refactor

* naming refactor

* Update models.py

* refactor

* removed specialists, dropped contour_touching

* dropped specialists and contour-touching

* Update models.py

* Dev patching (#61)

* init

* added new postpro

* local

* patching works

* added test time augmentation

* cropping bugs fixed

* fixed callbacks volatile error, updated config, dropped debug from main

* dropped loader pickling

* added pad if smaller

* added more augmentation to the patching seq

* added mosaic padding to loaders, updated augmentations

* added dev mode, updated config, added specialists with patching

* fixed mosaic loader bug

* Update main.py

dropped debug saving

* updated postprocessing, added fixes to patching

* updated postprocessing, added dev mode, fixed loaders, changed mask preprocessing to get full masks and internal contours

* fixed mosaic for larger patches, adjusted min blob size in postpro

* pipelines with specialists and multi-output with patching are working, dropped 0-channel load from loaders, minor fixes in loss definition

* added small random crop/pads, fixed pipelines for no patching mode, added simple validation mode

* added artifact images to train

* added global seeding

* fixed checkerboard effect

* added normalization

* added blur to augmentations, added wireframe of scaling pipeline, reverted to vanilla postprocessing

* added trainable rescaling loop

* fixed contour regeneration bug

* refactored contour generation, upgraded contour generation in rescaling, cleaned pipelines

* added dev and simple cv models, added caching to inference pipeline

* added stain deconvolution

* fixed image loading for grey images

* fixed normalization of patches

* moved stand-alone notebooks to a separate dir, dropped specialists, refactored pipelines

* fixed pipelines, updated configs

* added kaggle notebooks, small refactor in pipelines, preprocessing clean up

* Update augmentation.py

* corrections in configs

* imports optimized, removed plot_list function from utils.py

* corrections

* bug fix

* added color_seq_RGB

* Update neptune.yaml

* drop_big_artifacts (#67)

* Dev external data (#68)

* added generation of metadata and corresponding target masks for external datasets, updated configs

* updated augmentation

* fixed train valid split for vgg clustering version

* fixed train valid split on clusters with external

* optimized imports, dropped plot_list() from utils.py

* added color_seq_RGB

* corrected best_configs

* bug fix

* added dummy load save to base transformer and dropped redundant stuff… (#69)

* added dummy load/save to base transformer and dropped redundant stuff, added chunking

* Update postprocessing.py

* Update preprocessing.py

* Dev stage2 (#74)

* added run end-to-end with configs, added competition_stage parameter

* added postpro dev to pipeline

* Update neptune_rescaled_patched.yaml

* Update neptune_rescaled_patched.yaml

* Update neptune_rescaled_patched.yaml

* Update neptune_size_estimator.yaml

* Update run_end_to_end.sh
Kamil A. Kaczmarek authored Jul 19, 2018
1 parent d6ac80c commit 1c1914d
Showing 30 changed files with 2,830 additions and 925 deletions.
130 changes: 112 additions & 18 deletions augmentation.py
@@ -1,3 +1,4 @@
import numpy as np
from imgaug import augmenters as iaa

affine_seq = iaa.Sequential([
@@ -14,23 +15,116 @@
], random_order=True)

color_seq = iaa.Sequential([
    # Color
    iaa.OneOf([
        iaa.Sequential([
            iaa.ChangeColorspace(from_colorspace="RGB", to_colorspace="HSV"),
            iaa.WithChannels(0, iaa.Add((0, 100))),
            iaa.ChangeColorspace(from_colorspace="HSV", to_colorspace="RGB")]),
        iaa.Sequential([
            iaa.ChangeColorspace(from_colorspace="RGB", to_colorspace="HSV"),
            iaa.WithChannels(1, iaa.Add((0, 100))),
            iaa.ChangeColorspace(from_colorspace="HSV", to_colorspace="RGB")]),
        iaa.Sequential([
            iaa.ChangeColorspace(from_colorspace="RGB", to_colorspace="HSV"),
            iaa.WithChannels(2, iaa.Add((0, 100))),
            iaa.ChangeColorspace(from_colorspace="HSV", to_colorspace="RGB")]),
        iaa.WithChannels(0, iaa.Add((0, 100))),
        iaa.WithChannels(1, iaa.Add((0, 100))),
        iaa.WithChannels(2, iaa.Add((0, 100)))
    ]),
    iaa.Sometimes(0.5, iaa.OneOf([iaa.AverageBlur(k=((5, 11), (5, 11))),
                                  iaa.AdditiveGaussianNoise(scale=0.05 * 255, per_channel=0.5)
                                  ]))
], random_order=True)

color_seq_RGB = iaa.Sequential([
    iaa.SomeOf((1, 2),
               [iaa.Sequential([
                   iaa.ChangeColorspace(from_colorspace="RGB", to_colorspace="HSV"),
                   iaa.WithChannels(0, iaa.Add((0, 100))),
                   iaa.ChangeColorspace(from_colorspace="HSV", to_colorspace="RGB")]),
                iaa.Sequential([
                    iaa.ChangeColorspace(from_colorspace="RGB", to_colorspace="HSV"),
                    iaa.WithChannels(1, iaa.Add((0, 100))),
                    iaa.ChangeColorspace(from_colorspace="HSV", to_colorspace="RGB")]),
                iaa.Sequential([
                    iaa.ChangeColorspace(from_colorspace="RGB", to_colorspace="HSV"),
                    iaa.WithChannels(2, iaa.Add((0, 100))),
                    iaa.ChangeColorspace(from_colorspace="HSV", to_colorspace="RGB")]),
                iaa.WithChannels(0, iaa.Add((0, 100))),
                iaa.WithChannels(1, iaa.Add((0, 100))),
                iaa.WithChannels(2, iaa.Add((0, 100)))]
               ),
    iaa.Sometimes(0.5, iaa.OneOf([iaa.AverageBlur(k=((5, 11), (5, 11))),
                                  iaa.AdditiveGaussianNoise(scale=0.05 * 255, per_channel=0.5)])
                  )
], random_order=True)


def patching_seq(crop_size):
    h, w = crop_size

    seq = iaa.Sequential([
        iaa.Affine(rotate=(0, 360)),
        CropFixed(px=h),
        iaa.Fliplr(0.5),
        iaa.Flipud(0.5),
        iaa.Sometimes(0.5, iaa.CropAndPad(percent=(-0.1, 0.1), pad_cval=0)),
        iaa.Sometimes(0.5, iaa.PiecewiseAffine(scale=(0.02, 0.06)))
    ], random_order=False)
    return seq


class CropFixed(iaa.Augmenter):
    """Crops (or zero-pads) every image to a fixed px x px size."""

    def __init__(self, px=None, name=None, deterministic=False, random_state=None):
        super(CropFixed, self).__init__(name=name, deterministic=deterministic, random_state=random_state)
        self.px = px

    def _augment_images(self, images, random_state, parents, hooks):
        result = []
        seeds = random_state.randint(0, 10 ** 6, (len(images),))
        for i, image in enumerate(images):
            seed = seeds[i]
            image_cr = self._random_crop_or_pad(seed, image)
            result.append(image_cr)
        return result

    def _augment_keypoints(self, keypoints_on_images, random_state, parents, hooks):
        # Keypoints are not used in this pipeline, so they are simply dropped.
        result = []
        return result

    def _random_crop_or_pad(self, seed, image):
        height, width = image.shape[:2]

        # Crop each dimension that exceeds the target size; pad each one that falls short.
        if height <= self.px and width > self.px:
            image_processed = self._random_crop(seed, image, crop_h=False, crop_w=True)
            image_processed = self._pad(image_processed)
        elif height > self.px and width <= self.px:
            image_processed = self._random_crop(seed, image, crop_h=True, crop_w=False)
            image_processed = self._pad(image_processed)
        elif height <= self.px and width <= self.px:
            image_processed = self._pad(image)
        else:
            image_processed = self._random_crop(seed, image, crop_h=True, crop_w=True)
        return image_processed

    def _random_crop(self, seed, image, crop_h=True, crop_w=True):
        height, width = image.shape[:2]

        if crop_h:
            np.random.seed(seed)
            crop_top = np.random.randint(height - self.px)
            crop_bottom = crop_top + self.px
        else:
            crop_top, crop_bottom = (0, height)

        if crop_w:
            np.random.seed(seed + 1)
            crop_left = np.random.randint(width - self.px)
            crop_right = crop_left + self.px
        else:
            crop_left, crop_right = (0, width)

        if len(image.shape) == 2:
            image_cropped = image[crop_top:crop_bottom, crop_left:crop_right]
        else:
            image_cropped = image[crop_top:crop_bottom, crop_left:crop_right, :]
        return image_cropped

    def _pad(self, image):
        # Zero-pad the bottom/right edges up to the target size.
        if len(image.shape) == 2:
            height, width = image.shape
            image_padded = np.zeros((max(height, self.px), max(width, self.px))).astype(np.uint8)
            image_padded[:height, :width] = image
        else:
            height, width, channels = image.shape
            image_padded = np.zeros((max(height, self.px), max(width, self.px), channels)).astype(np.uint8)
            image_padded[:height, :width, :] = image
        return image_padded

    def get_parameters(self):
        return []
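
For reference, a minimal usage sketch of the augmenters defined above (the array shapes and values are illustrative stand-ins, not from the repository):

import numpy as np

# Illustrative stand-in for a real image tile.
image = np.random.randint(0, 255, size=(300, 400, 3), dtype=np.uint8)

# Color jitter on the full image, then rotate/crop/flip into a fixed 256x256 patch.
color_augmented = color_seq_RGB.augment_image(image)
patch = patching_seq(crop_size=(256, 256)).augment_image(color_augmented)
print(patch.shape)  # (256, 256, 3)

# CropFixed zero-pads inputs that are smaller than the target size.
small = np.random.randint(0, 255, size=(100, 120, 3), dtype=np.uint8)
print(CropFixed(px=256).augment_image(small).shape)  # (256, 256, 3)
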
117 changes: 117 additions & 0 deletions best_configs/neptune_rescaled_patched.yaml
@@ -0,0 +1,117 @@
project-key: DSB

name: dsb_open_solution
tags: [solution_5]

metric:
  channel: 'Final Validation Score'
  goal: maximize

# Comment out if not in Cloud Environment
pip-requirements-file: requirements.txt

exclude:
  - .git
  - .idea
  - .ipynb_checkpoints
  - output
  - imgs
  - neptune.log
  - offline_job.log
  - notebooks

parameters:
  # Cloud Environment
  data_dir: /public/dsb_2018_data/
  meta_dir: /public/dsb_2018_data/
  external_data_dirs: /public/dsb_2018_data/external_data/
  masks_overlayed_dir: /public/dsb_2018_data/masks_overlayed/
  contours_overlayed_dir: /public/dsb_2018_data/contours_overlayed/
  centers_overlayed_dir: /public/dsb_2018_data/centers_overlayed/
  experiment_dir: /output/dsb/experiments/

  # Local Environment
  # data_dir: /path/to/data
  # meta_dir: /path/to/data
  # external_data_dirs: /path/to/external/data
  # masks_overlayed_dir: /path/to/masks_overlayed
  # contours_overlayed_dir: /path/to/contours_overlayed
  # centers_overlayed_dir: /path/to/centers_overlayed
  # experiment_dir: /path/to/work/dir

  # General parameters
  valid_category_ids: '[0, 1]'
  overwrite: 0
  num_workers: 4
  load_in_memory: 1
  pin_memory: 1
  use_patching: 1
  patching_stride: 256

  # Image parameters (size estimator)
  size_estimator__image_h: 512
  size_estimator__image_w: 512
  size_estimator__image_channels: 1

  # U-Net parameters (size estimator)
  size_estimator__nr_unet_outputs: 3
  size_estimator__n_filters: 16
  size_estimator__conv_kernel: 3
  size_estimator__pool_kernel: 3
  size_estimator__pool_stride: 2
  size_estimator__repeat_blocks: 4

  # U-Net loss weights (size estimator)
  size_estimator__mask: 0.75
  size_estimator__contour: 1.0
  size_estimator__center: 0.25
  size_estimator__bce_mask: 1.0
  size_estimator__dice_mask: 1.0
  size_estimator__bce_contour: 1.0
  size_estimator__dice_contour: 1.0
  size_estimator__bce_center: 1.0
  size_estimator__dice_center: 1.0

  # Image parameters (multi-output)
  image_h: 512
  image_w: 512
  image_channels: 1

  # U-Net parameters (multi-output)
  nr_unet_outputs: 3
  n_filters: 16
  conv_kernel: 3
  pool_kernel: 3
  pool_stride: 2
  repeat_blocks: 4

  # U-Net loss weights (multi-output)
  mask: 0.75
  contour: 1.0
  center: 0.25
  bce_mask: 1.0
  dice_mask: 1.0
  bce_contour: 1.0
  dice_contour: 1.0
  bce_center: 1.0
  dice_center: 1.0

  # Training schedule
  epochs_nr: 1000
  batch_size_train: 4
  batch_size_inference: 4
  lr: 0.0002
  momentum: 0.9
  gamma: 1.0
  patience: 50

  # Regularization
  use_batch_norm: 1
  l2_reg_conv: 0.00005
  l2_reg_dense: 0.0
  dropout_conv: 0.1
  dropout_dense: 0.0

  # Postprocessing
  threshold: 0.5
  min_nuclei_size: 20
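
Read together, the mask/contour/center weights and their bce_*/dice_* companions imply a combined objective along the lines of the sketch below. This is an assumed reconstruction for illustration, not the repository's actual models.py code; soft_dice_loss and the weight-dict layout are my own naming:

import torch
import torch.nn as nn

def soft_dice_loss(logits, targets, eps=1.0):
    # 1 - Dice coefficient on sigmoid probabilities (smooth variant).
    probs = torch.sigmoid(logits)
    intersection = (probs * targets).sum()
    return 1.0 - (2.0 * intersection + eps) / (probs.sum() + targets.sum() + eps)

def weighted_segmentation_loss(outputs, targets, w):
    # outputs/targets: dicts of float tensors keyed by 'mask'/'contour'/'center'.
    # Sum over the three U-Net outputs of
    #   output_weight * (bce_weight * BCE + dice_weight * Dice),
    # with weights taken from the config above (mask: 0.75, contour: 1.0, center: 0.25, ...).
    bce = nn.BCEWithLogitsLoss()
    total = 0.0
    for name in ('mask', 'contour', 'center'):
        per_output = (w['bce_' + name] * bce(outputs[name], targets[name]) +
                      w['dice_' + name] * soft_dice_loss(outputs[name], targets[name]))
        total = total + w[name] * per_output
    return total
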
117 changes: 117 additions & 0 deletions best_configs/neptune_size_estimator_training.yaml
@@ -0,0 +1,117 @@
project-key: DSB

name: dsb_open_solution
tags: [solution_5]

metric:
  channel: 'Final Validation Score'
  goal: maximize

# Comment out if not in Cloud Environment
pip-requirements-file: requirements.txt

exclude:
  - .git
  - .idea
  - .ipynb_checkpoints
  - output
  - imgs
  - neptune.log
  - offline_job.log
  - notebooks

parameters:
  # Cloud Environment
  data_dir: /public/dsb_2018_data/
  meta_dir: /public/dsb_2018_data/
  external_data_dirs: /public/dsb_2018_data/external_data/
  masks_overlayed_dir: /public/dsb_2018_data/masks_overlayed/
  contours_overlayed_dir: /public/dsb_2018_data/contours_overlayed/
  centers_overlayed_dir: /public/dsb_2018_data/centers_overlayed/
  experiment_dir: /output/dsb/experiments/

  # Local Environment
  # data_dir: /path/to/data
  # meta_dir: /path/to/data
  # external_data_dirs: /path/to/external/data
  # masks_overlayed_dir: /path/to/masks_overlayed
  # contours_overlayed_dir: /path/to/contours_overlayed
  # centers_overlayed_dir: /path/to/centers_overlayed
  # experiment_dir: /path/to/work/dir

  # General parameters
  valid_category_ids: '[0, 1]'
  overwrite: 1
  num_workers: 4
  load_in_memory: 1
  pin_memory: 1
  use_patching: 1
  patching_stride: 256

  # Image parameters (size estimator)
  size_estimator__image_h: 512
  size_estimator__image_w: 512
  size_estimator__image_channels: 1

  # U-Net parameters (size estimator)
  size_estimator__nr_unet_outputs: 3
  size_estimator__n_filters: 16
  size_estimator__conv_kernel: 3
  size_estimator__pool_kernel: 3
  size_estimator__pool_stride: 2
  size_estimator__repeat_blocks: 4

  # U-Net loss weights (size estimator)
  size_estimator__mask: 0.75
  size_estimator__contour: 1.0
  size_estimator__center: 0.25
  size_estimator__bce_mask: 1.0
  size_estimator__dice_mask: 1.0
  size_estimator__bce_contour: 1.0
  size_estimator__dice_contour: 1.0
  size_estimator__bce_center: 1.0
  size_estimator__dice_center: 1.0

  # Image parameters (multi-output)
  image_h: 512
  image_w: 512
  image_channels: 1

  # U-Net parameters (multi-output)
  nr_unet_outputs: 3
  n_filters: 16
  conv_kernel: 3
  pool_kernel: 3
  pool_stride: 2
  repeat_blocks: 4

  # U-Net loss weights (multi-output)
  mask: 0.75
  contour: 1.0
  center: 0.25
  bce_mask: 1.0
  dice_mask: 1.0
  bce_contour: 1.0
  dice_contour: 1.0
  bce_center: 1.0
  dice_center: 1.0

  # Training schedule
  epochs_nr: 1000
  batch_size_train: 4
  batch_size_inference: 4
  lr: 0.0002
  momentum: 0.9
  gamma: 1.0
  patience: 50

  # Regularization
  use_batch_norm: 1
  l2_reg_conv: 0.00005
  l2_reg_dense: 0.0
  dropout_conv: 0.1
  dropout_dense: 0.0

  # Postprocessing
  threshold: 0.5
  min_nuclei_size: 20
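
With image_h/image_w of 512 and patching_stride of 256, patched inference presumably slides a 512-pixel window with 50% overlap. The helper below is only a sketch of that grid arithmetic (patch_grid is a hypothetical name; the real loaders additionally handle mosaic padding and pad-if-smaller):

def patch_grid(height, width, patch_size=512, stride=256):
    """Top-left coordinates of overlapping patches covering an image.
    Assumes the image was already padded to at least patch_size per side."""
    ys = list(range(0, max(height - patch_size, 0) + 1, stride))
    xs = list(range(0, max(width - patch_size, 0) + 1, stride))
    # Make sure the bottom/right borders are covered exactly.
    if ys[-1] != height - patch_size:
        ys.append(height - patch_size)
    if xs[-1] != width - patch_size:
        xs.append(width - patch_size)
    return [(y, x) for y in ys for x in xs]

# A 1000x1300 image yields rows at y = 0, 256, 488 and columns at x = 0, 256, 512, 768, 788.
print(patch_grid(1000, 1300))
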
9 changes: 4 additions & 5 deletions callbacks.py
@@ -1,9 +1,8 @@
import numpy as np
import torch
from torch.autograd import Variable
from PIL import Image
from deepsense import neptune

from steps.pytorch.callbacks import NeptuneMonitor
from utils import sigmoid
@@ -56,9 +55,9 @@ def get_prediction_masks(self):
        targets_tensors = data[1:]

        if torch.cuda.is_available():
            X = Variable(X, volatile=True).cuda()
        else:
            X = Variable(X, volatile=True)

        outputs_batch = self.model(X)
        if len(outputs_batch) == len(self.output_names):
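
Side note on the volatile=True hot-fix visible above: in pre-0.4 PyTorch this flag disabled autograd graph construction during inference. On current PyTorch the same effect (shown purely as a hedged modern equivalent, not as code from this commit) is obtained with torch.no_grad():

import torch

model = torch.nn.Linear(4, 2)   # hypothetical stand-in for the U-Net
X = torch.randn(1, 4)

with torch.no_grad():           # no autograd graph, like Variable(X, volatile=True)
    outputs_batch = model(X)
print(outputs_batch.shape)      # torch.Size([1, 2])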
