I would like to document two aspects of nobrainer: working with nobrainer on the Satori cluster, and the Kullback-Leibler divergence computation in losses.py.
To set up a conda environment with nobrainer on Satori:
conda config --prepend channels https://public.dhe.ibm.com/ibmdl/export/pub/software/server/ibm-ai/conda-early-access/
conda create --name wmlce-ea python=3.6
conda activate wmlce-ea
conda install tensorflow=2.1.0=gpu_py36_914.g4f6e601
conda install numpy # do the same for all required libraries: click, scikit-image
cd ~/nibabel && python setup.py install # install nibabel using its setup.py file
cd ~/nobrainer && python setup.py install # install nobrainer using its setup.py file
This environment can be used to run nobrainer commands on SLURM nodes.
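As a hedged illustration, here is a minimal SLURM batch script that activates this environment before running a nobrainer command. The job name, resource requests, and the `nobrainer --help` invocation are placeholder assumptions, not values from the source; adjust them to your cluster's partitions and quotas:

```shell
#!/bin/bash
#SBATCH --job-name=nobrainer-job   # hypothetical job name
#SBATCH --gres=gpu:1               # request one GPU (flag syntax is cluster-specific)
#SBATCH --time=01:00:00            # assumed walltime

# Activate the conda environment created above.
source activate wmlce-ea

# Any nobrainer CLI command can run here; --help is just a placeholder.
nobrainer --help
```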
The Kullback-Leibler divergence is computed as sum(model.losses) in nobrainer's losses.py script. As reported in TF 2.1.0 issue tensorflow/probability#894 ("What is the purpose of _built_kernel_divergence and _built_bias_divergence?"), computing the KL divergence this way with variational layers gives rise to symbolic tensors, because the _build_kl_divergence attribute of each layer is set to False after the first forward pass. One workaround that makes this work with the TF version currently in conda is to manually reset the _build_kl_divergence attribute of each layer to True after each forward pass. However, this workaround is incompatible with multi-GPU training and tf.distribute.Strategy.
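The failure mode and the reset workaround can be sketched without TensorFlow. ToyVariationalLayer and kl_loss below are illustrative stand-ins written for this sketch, not the real tensorflow-probability classes; the flag name mirrors the attribute the text describes:

```python
# Toy stand-in for a variational layer: it contributes its KL term to
# `losses` only while its _build_kl_divergence flag is True, and the
# flag is flipped to False after the first forward pass, as the text
# describes. This is not the real tensorflow-probability implementation.
class ToyVariationalLayer:
    def __init__(self, kl_value):
        self.kl_value = kl_value
        self._build_kl_divergence = True  # True until the first forward pass
        self.losses = []

    def __call__(self, x):
        if self._build_kl_divergence:
            self.losses = [self.kl_value]      # KL term recorded on this pass
            self._build_kl_divergence = False  # flipped off afterwards
        return x


def kl_loss(layers):
    # Mirrors sum(model.losses): collect every layer's recorded KL terms.
    return sum(loss for layer in layers for loss in layer.losses)


layers = [ToyVariationalLayer(0.5), ToyVariationalLayer(1.5)]

for layer in layers:      # first forward pass
    layer(None)
total = kl_loss(layers)   # 0.5 + 1.5 = 2.0

# The workaround from the text: reset the flag after each forward pass
# so the KL terms are rebuilt on the next one.
for layer in layers:
    layer._build_kl_divergence = True
```

The same manual reset is what breaks under multi-GPU training: mutating per-layer Python state between forward passes does not compose with replicated execution under a distribution strategy.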