Documentation update #184

Merged: 63 commits (Dec 23, 2021)
Changes from 23 commits
Commits (63)
42410c2 Documentation (Aakanksha-Rana, Nov 16, 2021)
78c8f7e [pre-commit.ci] auto fixes from pre-commit.com hooks (pre-commit-ci[bot], Nov 16, 2021)
7eb022a documentation (Aakanksha-Rana, Nov 18, 2021)
a8104c8 Rename nobrainer/spatial_transforms.py to nobrainer/transforms/spatia… (Aakanksha-Rana, Nov 18, 2021)
f7358d4 Rename nobrainer/transforms/spatial_transforms.py to nobrainer/spatia… (Aakanksha-Rana, Nov 18, 2021)
2adb1a6 Merge branch 'master' into main_branch (satra, Nov 18, 2021)
e77e43a Documentation (Aakanksha-Rana, Nov 18, 2021)
9bb331e [pre-commit.ci] auto fixes from pre-commit.com hooks (pre-commit-ci[bot], Nov 18, 2021)
cb5a41c documentation (Aakanksha-Rana, Nov 18, 2021)
0ed47be [pre-commit.ci] auto fixes from pre-commit.com hooks (pre-commit-ci[bot], Nov 18, 2021)
cbaf68f Documentation (Aakanksha-Rana, Nov 19, 2021)
5247b80 Documentation (Aakanksha-Rana, Nov 19, 2021)
ed922d1 documentation (Aakanksha-Rana, Nov 19, 2021)
ab2ee8d [pre-commit.ci] auto fixes from pre-commit.com hooks (pre-commit-ci[bot], Nov 19, 2021)
82e9d46 [pre-commit.ci] auto fixes from pre-commit.com hooks (pre-commit-ci[bot], Nov 19, 2021)
03360c3 documentation (Aakanksha-Rana, Nov 19, 2021)
b7a8f63 documentation (Aakanksha-Rana, Nov 19, 2021)
760b9ec documentation (Aakanksha-Rana, Nov 19, 2021)
1a02d17 documentation (Aakanksha-Rana, Nov 19, 2021)
cdcea86 documentation (Aakanksha-Rana, Nov 19, 2021)
de630d6 documentation (Aakanksha-Rana, Nov 19, 2021)
ed8bf15 documentation (Aakanksha-Rana, Nov 19, 2021)
3ff8887 Merge branch 'master' into main_branch (satra, Dec 6, 2021)
7cf8ffd Merge branch 'neuronets:master' into main_branch (Aakanksha-Rana, Dec 7, 2021)
43d4cc7 Update nobrainer/spatial_transforms.py (Aakanksha-Rana, Dec 10, 2021)
ab5dc42 Update nobrainer/spatial_transforms.py (Aakanksha-Rana, Dec 10, 2021)
7c61903 Update nobrainer/spatial_transforms.py (Aakanksha-Rana, Dec 10, 2021)
3e17dfa Update nobrainer/models/vnet.py (Aakanksha-Rana, Dec 10, 2021)
25a2f34 Update nobrainer/spatial_transforms.py (Aakanksha-Rana, Dec 10, 2021)
72dfc57 [pre-commit.ci] auto fixes from pre-commit.com hooks (pre-commit-ci[bot], Dec 10, 2021)
45614e5 improved docstrings (Aakanksha-Rana, Dec 13, 2021)
dd23e37 [pre-commit.ci] auto fixes from pre-commit.com hooks (pre-commit-ci[bot], Dec 13, 2021)
b878edc improved docstring (Aakanksha-Rana, Dec 13, 2021)
b28eb94 improved docstring (Aakanksha-Rana, Dec 13, 2021)
780c64c [pre-commit.ci] auto fixes from pre-commit.com hooks (pre-commit-ci[bot], Dec 13, 2021)
cd0a0dd improved docstring (Aakanksha-Rana, Dec 13, 2021)
2f472cf [pre-commit.ci] auto fixes from pre-commit.com hooks (pre-commit-ci[bot], Dec 13, 2021)
7f50ae1 Update bayesian_utils.py (Aakanksha-Rana, Dec 13, 2021)
e255357 [pre-commit.ci] auto fixes from pre-commit.com hooks (pre-commit-ci[bot], Dec 13, 2021)
e528a15 Update intensity_transforms.py (Aakanksha-Rana, Dec 13, 2021)
ffdb14a [pre-commit.ci] auto fixes from pre-commit.com hooks (pre-commit-ci[bot], Dec 13, 2021)
8247c2f Update spatial_transforms.py (Aakanksha-Rana, Dec 13, 2021)
472f81a [pre-commit.ci] auto fixes from pre-commit.com hooks (pre-commit-ci[bot], Dec 13, 2021)
52c24ec Update vnet.py (Aakanksha-Rana, Dec 17, 2021)
72024ae [pre-commit.ci] auto fixes from pre-commit.com hooks (pre-commit-ci[bot], Dec 17, 2021)
16dc969 docstrings models and block functions (Aakanksha-Rana, Dec 17, 2021)
3149d93 [pre-commit.ci] auto fixes from pre-commit.com hooks (pre-commit-ci[bot], Dec 17, 2021)
9fbe78b docstrings models complete (Aakanksha-Rana, Dec 17, 2021)
118258e Merge branch 'master' into main_branch (satra, Dec 18, 2021)
dda43a0 docstrings (Aakanksha-Rana, Dec 22, 2021)
891fb31 [pre-commit.ci] auto fixes from pre-commit.com hooks (pre-commit-ci[bot], Dec 22, 2021)
9c9b9d5 docstrings (Aakanksha-Rana, Dec 22, 2021)
4986e45 docstrings (Aakanksha-Rana, Dec 22, 2021)
0db77fd [pre-commit.ci] auto fixes from pre-commit.com hooks (pre-commit-ci[bot], Dec 22, 2021)
2404179 [pre-commit.ci] auto fixes from pre-commit.com hooks (pre-commit-ci[bot], Dec 22, 2021)
6f9cb34 docstrings (Aakanksha-Rana, Dec 22, 2021)
77ab81e [pre-commit.ci] auto fixes from pre-commit.com hooks (pre-commit-ci[bot], Dec 22, 2021)
4dad6dc docstrings (Aakanksha-Rana, Dec 22, 2021)
a1dc846 docstrings (Aakanksha-Rana, Dec 22, 2021)
0f0b92d [pre-commit.ci] auto fixes from pre-commit.com hooks (pre-commit-ci[bot], Dec 22, 2021)
0fc2ee3 docstrings (Aakanksha-Rana, Dec 22, 2021)
f861a8d [pre-commit.ci] auto fixes from pre-commit.com hooks (pre-commit-ci[bot], Dec 22, 2021)
b6076cf [pre-commit.ci] auto fixes from pre-commit.com hooks (pre-commit-ci[bot], Dec 22, 2021)
22 changes: 21 additions & 1 deletion nobrainer/bayesian_utils.py
@@ -23,8 +23,13 @@ def default_loc_scale_fn(
untransformed_scale_constraint=None,
weightnorm=False,
):
"""
Aakanksha-Rana marked this conversation as resolved.
Show resolved Hide resolved
This function creates `mean`, `std`
and weightnorm parameters for variational layers
"""

def _fn(dtype, shape, name, trainable, add_variable_fn):
"""Creates `loc`, `scale` parameters."""
"""Creates `loc`, `scale` and weightnorm parameters."""
loc = add_variable_fn(
name=name + "_loc",
shape=shape,
@@ -85,6 +90,17 @@ def default_mean_field_normal_fn(
untransformed_scale_constraint=None,
weightnorm=False,
):
"""
This function sets layers: deterministic and variational
args:
is_singular(Boolean): True sets deterministic layers, False for variational
loc_initializer: mean kernal initializer.
untransformed_scale_initializer: standard deviation kernal initializer
loc_regularizer: mean kernal regularizer. Deafult= None, options= l1,l2
untransformed_scale_regularizer: standard deviation kernal regulaizer
loc_constraint and untransformed_scale_constraint expects tf constraint functions
weightnorm(Boolean): Sets weightnorm on mean kernal. Default(False).
"""
loc_scale_fn = default_loc_scale_fn(
is_singular=is_singular,
loc_initializer=loc_initializer,
@@ -109,6 +125,8 @@ def _fn(dtype, shape, name, trainable, add_variable_fn):


def divergence_fn_bayesian(prior_std, examples_per_epoch):
"""Scaled KLD function for ELBO loss with examples per epochs as scaling parameter"""

def divergence_fn(q, p, _):
log_probs = tfd.LogNormal(0.0, prior_std).log_prob(p.stddev())
out = tfd.kl_divergence(q, p) - tf.reduce_sum(log_probs)


def prior_fn_for_bayesian(init_scale_mean=-1, init_scale_std=0.1):
"""Set priors for the variational Layers (with a possibility of trainable priors)"""

def prior_fn(dtype, shape, name, _, add_variable_fn):
untransformed_scale = add_variable_fn(
name=name + "_untransformed_scale",
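Taken together, these utilities plug into TensorFlow Probability's Flipout layers. A minimal sketch, not part of this diff; the filter count, prior_std, and examples_per_epoch values are illustrative:

import tensorflow_probability as tfp

from nobrainer.bayesian_utils import (
    default_mean_field_normal_fn,
    divergence_fn_bayesian,
    prior_fn_for_bayesian,
)

# Mean-field normal posterior over the kernel (variational, not deterministic).
posterior_fn = default_mean_field_normal_fn(is_singular=False)
# Prior with a trainable scale, as defined above.
prior_fn = prior_fn_for_bayesian(init_scale_mean=-1, init_scale_std=0.1)
# KL divergence scaled by the number of examples per epoch (illustrative value).
kld = divergence_fn_bayesian(prior_std=1.0, examples_per_epoch=1000)

layer = tfp.layers.Convolution3DFlipout(
    filters=8,
    kernel_size=3,
    padding="SAME",
    kernel_posterior_fn=posterior_fn,
    kernel_prior_fn=prior_fn,
    kernel_divergence_fn=kld,
)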
47 changes: 46 additions & 1 deletion nobrainer/intensity_transforms.py
@@ -5,7 +5,15 @@

def addGaussianNoise(x, y=None, trans_xy=False, noise_mean=0.0, noise_std=0.1):
"""
Adds gaussian noise to 3D tensor and label
Add Gaussian Noise to the Input and label
Input x is a tensor or numpy to have rank 3,
Label y is a tensor or numpy to have rank 3,
noise_mean and Noise_std are parameters for Noise addition,
Args:
noise_mean (int): Default = 0.0;
noise_std (int): Default=0.1;
trans_xy(Boolean): transforms both x and y. If set True, function
will require both x,y.
"""
if not tf.is_tensor(x):
x = tf.convert_to_tensor(x)
@@ -26,6 +34,14 @@ def addGaussianNoise(x, y=None, trans_xy=False, noise_mean=0.0, noise_std=0.1):


def minmaxIntensityScaling(x, y=None, trans_xy=False):
"""
Intensity Scaling between 0-1
Input x is a tensor or numpy to have rank 3,
Label y is a tensor or numpy to have rank 3,
Args:
trans_xy(Boolean): transforms both x and y. If set True, function
will require both x,y.
"""
if not tf.is_tensor(x):
x = tf.convert_to_tensor(x)
x = tf.cast(x, tf.float32)


def customIntensityScaling(x, y=None, trans_xy=False, scale_x=[0.0, 1.0], scale_y=None):
"""
Custom Intensity Scaling
Input x is a tensor or numpy to have rank 3,
Label y is a tensor or numpy to have rank 3,
Args:
trans_xy(Boolean): transforms both x and y (Default: False).
If set True, function
will require both x,y.
scale_x: [minimum(int), maximum(int)]
scale_y: [minimum(int), maximum(int)]
"""
x_norm, y_norm = minmaxIntensityScaling(x, y, trans_xy)
minx = tf.cast(
tf.convert_to_tensor(scale_x[0] * np.ones(x_norm.shape).astype(np.float32)),
@@ -84,6 +111,15 @@ def customIntensityScaling(x, y=None, trans_xy=False, scale_x=[0.0, 1.0], scale_


def intensityMasking(x, mask_x, y=None, trans_xy=False, mask_y=None):
"""
Masking the Intensity values in Input and Label
Input x is a tensor or numpy array to have rank 3,
Label y is a tensor or numpy array to have rank 3,
mask_x is a tensor or numpy array of same shape as x
Args:
trans_xy(Boolean): transforms both x and y (Default: False).
If set True, function will require both x,y.
"""
if not tf.is_tensor(x):
x = tf.convert_to_tensor(x)
x = tf.cast(x, tf.float32)
@@ -111,6 +147,15 @@ def intensityMasking(x, mask_x, y=None, trans_xy=False, mask_y=None):


def contrastAdjust(x, y=None, trans_xy=False, gamma=1.0):
"""
Contrast Adjustment
Input x is a tensor or numpy array to have rank 3,
Label y is a tensor or numpy array to have rank 3,
gamma is contrast adjustment constant
Args:
trans_xy(Boolean): transforms both x and y (Default: False).
If set True, function will require both x,y.
"""
if not tf.is_tensor(x):
x = tf.convert_to_tensor(x)
x = tf.cast(x, tf.float32)
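A short usage sketch for the transforms above, not part of this diff; the volume shape and parameter values are illustrative, and the (x, y) pair return under trans_xy=True is inferred from the function bodies shown:

import numpy as np

from nobrainer.intensity_transforms import (
    addGaussianNoise,
    contrastAdjust,
    customIntensityScaling,
    minmaxIntensityScaling,
)

# Rank-3 input volume and label, as the docstrings require.
x = np.random.rand(32, 32, 32).astype(np.float32)
y = (np.random.rand(32, 32, 32) > 0.5).astype(np.float32)

x_noisy, y_noisy = addGaussianNoise(x, y, trans_xy=True, noise_std=0.1)
x01, y01 = minmaxIntensityScaling(x, y, trans_xy=True)
xs, ys = customIntensityScaling(x, y, trans_xy=True, scale_x=[0.0, 255.0], scale_y=[0.0, 1.0])
xg, yg = contrastAdjust(x, y, trans_xy=True, gamma=1.5)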
22 changes: 20 additions & 2 deletions nobrainer/models/bayesian_vnet.py
@@ -1,4 +1,5 @@
# Full bayesian adaptation of the Vnet model from https://arxiv.org/pdf/1606.04797.pdf
# Model definition of bayesian adaptation of the Vnet model
# from https://arxiv.org/pdf/1606.04797.pdf
from tensorflow.keras.layers import Input, MaxPooling3D, UpSampling3D, concatenate
from tensorflow.keras.models import Model
import tensorflow_probability as tfp
@@ -17,6 +18,7 @@ def down_stage(
activation="relu",
padding="SAME",
):
# Encoding blocks of the model
conv = tfp.layers.Convolution3DFlipout(
filters,
kernel_size,
@@ -52,6 +54,7 @@ def up_stage(
activation="relu",
padding="SAME",
):
# Decoding blocks of the VNet model
up = UpSampling3D()(inputs)
up = tfp.layers.Convolution3DFlipout(
filters,
@@ -101,6 +104,7 @@ def end_stage(
activation="relu",
padding="SAME",
):
# Last logit layer
conv = tfp.layers.Convolution3DFlipout(
n_classes,
kernel_size,
@@ -141,7 +145,21 @@ def bayesian_vnet(
activation="relu",
padding="SAME",
):

"""
Instantiate a 3D Bayesian VNet Architecture
Encoder and Decoder has 3D Flipout(variational layers)
Args:
n_classes(int): number of classes
input_shape(tuple):four ints representating the shape of 3D input
kernal_size(int): size of the kernal of conv layers
activation(str): all tf.keras.activations are allowed
kld: KL Divergence function default(None)
it can be set to others -->(lambda q, p, ignore: kl_lib.kl_divergence(q, p))
prior_fn: a func to initialize priors.
kernel_posterior_fn:a func to initlaize kernal posteriors
(loc, scale and weightnorms)
See Bayesian Utils for more options for kld, prior_fn and kernal_posterior_fn
"""
inputs = Input(input_shape)

conv1, pool1 = down_stage(
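A minimal instantiation sketch, not part of this diff; the input shape (depth, height, width, channels) and class count are illustrative, and the keyword names follow the docstring above:

from nobrainer.models.bayesian_vnet import bayesian_vnet

# Build the fully variational VNet with default priors and posteriors.
model = bayesian_vnet(
    n_classes=1,
    input_shape=(32, 32, 32, 1),
    kernel_size=3,
    activation="relu",
)
model.summary()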
22 changes: 21 additions & 1 deletion nobrainer/models/bayesian_vnet_semi.py
@@ -1,3 +1,5 @@
# Model definition for a Semi-Bayesian VNet with deterministic
# encoder and Bayesian decoder
from tensorflow.keras.layers import (
Conv3D,
Input,
@@ -15,6 +17,7 @@


def down_stage(inputs, filters, kernel_size=3, activation="relu", padding="SAME"):
# Encoding blocks of the model
conv = Conv3D(filters, kernel_size, activation=activation, padding=padding)(inputs)
conv = GroupNormalization()(conv)
conv = Conv3D(filters, kernel_size, activation=activation, padding=padding)(conv)
@@ -34,6 +37,7 @@ def up_stage(
activation="relu",
padding="SAME",
):
# Decoding blocks of the VNet model
up = UpSampling3D()(inputs)
up = tfp.layers.Convolution3DFlipout(
filters,
@@ -83,6 +87,7 @@ def end_stage(
activation="relu",
padding="SAME",
):
# Last logit layer
conv = tfp.layers.Convolution3DFlipout(
n_classes,
kernel_size,
@@ -123,7 +128,22 @@ def bayesian_vnet_semi(
activation="relu",
padding="SAME",
):

"""
Instantiate a 3D Semi-Bayesian VNet Architecture
Encoder has 3D Convolutional layers
and Decoder has 3D Flipout(variational layers)
Args:
n_classes(int): number of classes
input_shape(tuple):four ints representating the shape of 3D input
kernal_size(int): size of the kernal of conv layers
activation(str): all tf.keras.activations are allowed
kld: KL Divergence function default(None)
it can be set to -->(lambda q, p, ignore: kl_lib.kl_divergence(q, p))
prior_fn: a func to initialize priors.
kernel_posterior_fn:a func to initlaize kernal posteriors
(loc, scale and weightnorms)
See Bayesian Utils for options for kld, prior_fn and kernal_posterior_fn
"""
inputs = Input(input_shape)

conv1, pool1 = down_stage(
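A sketch of the semi-Bayesian variant with the kld, prior_fn, and kernel_posterior_fn options set explicitly, not part of this diff; kl_lib is assumed to be TFP's kullback_leibler module, and all values are illustrative:

from tensorflow_probability.python.distributions import kullback_leibler as kl_lib

from nobrainer.bayesian_utils import (
    default_mean_field_normal_fn,
    prior_fn_for_bayesian,
)
from nobrainer.models.bayesian_vnet_semi import bayesian_vnet_semi

model = bayesian_vnet_semi(
    n_classes=1,
    input_shape=(32, 32, 32, 1),
    # Unscaled KL divergence, as suggested in the docstring above.
    kld=lambda q, p, ignore: kl_lib.kl_divergence(q, p),
    prior_fn=prior_fn_for_bayesian(),
    kernel_posterior_fn=default_mean_field_normal_fn(),
)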
13 changes: 10 additions & 3 deletions nobrainer/models/highresnet.py
@@ -1,6 +1,4 @@
"""Model definition for HighResNet.

Implemented according to the [HighResNet manuscript](https://arxiv.org/abs/1707.01992).
"""

import tensorflow as tf
def highresnet(
n_classes, input_shape, activation="relu", dropout_rate=0, name="highresnet"
):
"""Instantiate HighResNet model."""
"""
Instantiate a 3D HighResnet Architecture.
Implementation is according to the
https://arxiv.org/abs/1707.01992
Args:
n_classes(int): number of classes
input_shape(tuple):four ints representating the shape of 3D input
activation(str): all tf.keras.activations are allowed
dropout_rate(int): [0,1].
"""

conv_kwds = {"kernel_size": (3, 3, 3), "padding": "same"}

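A minimal instantiation sketch, not part of this diff; the shape and dropout rate are illustrative:

from nobrainer.models.highresnet import highresnet

model = highresnet(
    n_classes=1,
    input_shape=(32, 32, 32, 1),
    activation="relu",
    dropout_rate=0.25,
)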
16 changes: 12 additions & 4 deletions nobrainer/models/unet.py
@@ -1,7 +1,6 @@
"""Model definition for 3D U-Net.

Implemented according to the [3D U-Net manuscript](https://arxiv.org/abs/1606.06650)
"""Model definition for UNet.
"""

import tensorflow as tf
from tensorflow.keras import layers

@@ -14,7 +13,16 @@ def unet(
batch_size=None,
name="unet",
):
"""Instantiate 3D U-Net architecture."""
"""
Instantiate a 3D UNet Architecture
UNet model: a 3D deep neural network model from
https://arxiv.org/abs/1606.06650
Args:
n_classes(int): number of classes
input_shape(tuple):four ints representating the shape of 3D input
activation(str): all tf.keras.activations are allowed
batch_size(int): batch size.
"""

conv_kwds = {
"kernel_size": (3, 3, 3),
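A minimal instantiation sketch, not part of this diff; the shape, batch size, and compile settings are illustrative:

import tensorflow as tf

from nobrainer.models.unet import unet

model = unet(
    n_classes=1,
    input_shape=(32, 32, 32, 1),
    batch_size=2,
)
# Illustrative training setup for a binary segmentation task.
model.compile(
    optimizer=tf.keras.optimizers.Adam(1e-4),
    loss=tf.keras.losses.BinaryCrossentropy(),
)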
18 changes: 17 additions & 1 deletion nobrainer/models/vnet.py
@@ -1,4 +1,6 @@
# Adaptation of the Vnet model from https://arxiv.org/pdf/1606.04797.pdf with dropouts and
# Adaptation of the VNet model from https://arxiv.org/pdf/1606.04797.pdf
# This 3D deep neural network model is regularized with 3D spatial dropout
# and Group normalization.

from tensorflow.keras.layers import (
Conv3D,


def down_stage(inputs, filters, kernel_size=3, activation="relu", padding="SAME"):
# encoding blocks of the VNet model
convd = Conv3D(filters, kernel_size, activation=activation, padding=padding)(inputs)
convd = GroupNormalization()(convd)
convd = Conv3D(filters, kernel_size, activation=activation, padding=padding)(convd)
@@ -23,6 +26,7 @@ def down_stage(inputs, filters, kernel_size=3, activation="relu", padding="SAME"


def up_stage(inputs, skip, filters, kernel_size=3, activation="relu", padding="SAME"):
# decoding blocks of the VNet model
up = UpSampling3D()(inputs)
up = Conv3D(filters, 2, activation=activation, padding=padding)(up)
up = GroupNormalization()(up)
@@ -40,6 +44,7 @@ def up_stage(inputs, skip, filters, kernel_size=3, activation="relu", padding="S


def end_stage(inputs, n_classes=1, kernel_size=3, activation="relu", padding="SAME"):
# last logit layer
conv = Conv3D(
filters=n_classes,
kernel_size=kernel_size,
@@ -62,6 +67,17 @@ def vnet(
padding="SAME",
**kwargs
):
"""
Instantiate a 3D VNet Architecture
VNet model: a 3D deep neural network model adapted from
Aakanksha-Rana marked this conversation as resolved.
Show resolved Hide resolved
https://arxiv.org/pdf/1606.04797.pdf
adatptations include groupnorm and spatial dropout.
Args:
n_classes(int): number of classes
input_shape(tuple):four ints representating the shape of 3D input
kernal_size(int): size of the kernal of conv layers
activation(str): all tf.keras.activations are allowed.
"""
inputs = Input(input_shape)

conv1, pool1 = down_stage(
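A minimal instantiation sketch with a forward pass, not part of this diff; shapes are illustrative:

import numpy as np

from nobrainer.models.vnet import vnet

model = vnet(
    n_classes=1,
    input_shape=(32, 32, 32, 1),
    kernel_size=3,
    activation="relu",
)
# Batch dimension first: (batch, depth, height, width, channels).
batch = np.random.rand(1, 32, 32, 32, 1).astype("float32")
probs = model.predict(batch)  # one probability map per class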