Authors: Rene Winchenbach, Nils Thuerey
Accepted at: International Conference on Learning Representations (ICLR) 2024 - Vienna (as a Poster)
Repository: https://github.com/tum-pbs/SFBC arXiv paper: https://arxiv.org/abs/2403.16680
Our code base supports the following basis functions:
- Radial Basis Functions ('gaussian', 'multiquadric', 'inverse_quadric', 'inverse_multiquadric', 'polyharmonic', 'bump')
- Interpolation Schemes ('linear' [This is the approach used by the CConv paper by Ummenhofer et al.], 'square' [nearest-neighbor interpolation])
- B-Spline Schemes ('cubic_spline' [This is the SplineConv basis from Fey et al.], 'quartic_spline', 'quintic_spline')
- SPH Kernels ('wendland2', 'wendland4', 'wendland6', 'poly6', 'spiky')
All of these can be used as they are ('rbf x'), normalized to be partitions of unity by widening their shape parameter ('abf x'), or normalized by dividing by the shape function ('ubf x'). For example, replicating the results of Fey et al. requires 'abf cubic_spline'.
We also offer:
- Fourier Terms ('ffourier' [Our primary approach], 'fourier' [which drops the first antisymmetric term]), each with suffixes ' even' and ' odd' to restrict the basis to even or odd symmetric terms
- Chebyshev Terms ('chebyshev', 'chebyshev2')
- An antisymmetric enforcement term ('dmcf' [This is the approach by Prantl et al.])
The code provides two primary classes, BasisConv and BasisNetwork. The former is an individual basis convolution layer; the latter is the network setup used for our publication.
The BasisConv class has the following arguments (src/BasisConvolution/convLayerv2.py):
BasisConv(inputFeatures: int, outputFeatures: int, dim: int = 2,
    basisTerms = [4, 4], basisFunction = 'linear', basisPeriodicity = False,
    linearLayerActive = False, linearLayerHiddenLayout = [32, 32], linearLayerActivation = 'relu',
    biasActive = False, feedThrough = False,
    preActivation = None, postActivation = None, cutlassBatchSize = 16,
    cutlassNormalization = False, initializer = 'uniform', optimizeWeights = False, exponentialDecay = False
)
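For example, a single 2D layer using our Fourier basis could be constructed as follows. This is a minimal sketch; the import path is inferred from the source file location above and the feature counts are illustrative:

```python
from BasisConvolution.convLayerv2 import BasisConv  # assumes src/ is on the Python path

# A 2D basis convolution layer mapping 4 input features to 32 output features,
# using our symmetric Fourier basis with 4 terms per dimension.
conv = BasisConv(inputFeatures = 4, outputFeatures = 32, dim = 2,
                 basisTerms = [4, 4], basisFunction = 'ffourier')
```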
The BasisNetwork class has the following arguments (src/BasisConvolution/convNetv2.py):
BasisNetwork(fluidFeatures, boundaryFeatures = 0, layers = [32, 64, 64, 2],
    denseLayer = True,
    activation = 'relu', coordinateMapping = 'cartesian',
    dims = [8], rbfs = ['linear', 'linear'], windowFn = None,
    batchSize = 32, ignoreCenter = True,
    normalized = False, outputScaling = 1/128,
    layerMLP = False, MLPLayout = [32, 32],
    convBias = False, outputBias = True,
    initializer = 'uniform', optimizeWeights = False, exponentialDecay = True,
    inputEncoder = None, outputDecoder = None, edgeMLP = None, vertexMLP = None)
For more information, see the respective source files.
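For example, a network mirroring the default layout could be instantiated as follows. This is a sketch based on the argument list above; the import path is inferred from the source file location, and the source file documents the exact semantics of dims and rbfs:

```python
from BasisConvolution.convNetv2 import BasisNetwork  # assumes src/ is on the Python path

# A four-layer network (32, 64, 64 and 2 output features per layer) on a single
# input feature, using our symmetric Fourier basis in both coordinate dimensions.
model = BasisNetwork(fluidFeatures = 1, boundaryFeatures = 0,
                     layers = [32, 64, 64, 2], dims = [8],
                     rbfs = ['ffourier', 'ffourier'],
                     coordinateMapping = 'cartesian', windowFn = None)
```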
The primary forward function of the network can be called as
model(fluidFeatures, fi, fj, fluidEdgeLengths,
      boundaryFeatures, bf, bb, boundaryEdgeLengths)
For this call, fluidFeatures are per-vertex features for the primary point cloud and boundaryFeatures are per-vertex features for the secondary point cloud. [fi, fj] is the adjacency matrix of the primary point cloud in COO format, and [bf, bb] is the adjacency matrix from the secondary (bb) to the primary (bf) point cloud. fluidEdgeLengths and boundaryEdgeLengths are the relative distances between nodes normalized by the node support radius (i.e., in the range [-1, 1]).
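As a hedged sketch of how these inputs might be assembled for a fluid-only setup (boundaryFeatures = 0, assuming the boundary arguments can then be omitted), using a brute-force radius search with illustrative particle counts and support radius:

```python
import torch

n, h = 1024, 0.05                            # particle count and support radius (illustrative)
positions = torch.rand(n, 2)                 # primary point cloud in 2D
fluidFeatures = torch.ones(n, 1)             # constant per-vertex input feature

# Brute-force radius search to build the COO adjacency [fi, fj].
dists = torch.cdist(positions, positions)    # (n, n) pairwise distances
fi, fj = torch.where(dists < h)              # self-edges included; see ignoreCenter

# Relative distances normalized by the support radius, i.e., in [-1, 1] per dimension.
fluidEdgeLengths = (positions[fj] - positions[fi]) / h

prediction = model(fluidFeatures, fi, fj, fluidEdgeLengths)
```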
As an example for training, see notebooks/exampleTraining. This notebook contains a simple training script that learns the SPH density kernel function for any of the four included datasets in a small ablation study. Here is an example result of the training for test case II with five different basis functions (Fourier, Fourier even terms only, Fourier odd terms only, linear and Chebyshev) and three different basis term counts (2, 4, 8):
You can also find an example of this ablation study on Google Colab
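Conceptually, the training in the notebook boils down to a standard supervised regression; the following is a hedged sketch, where the data loader and the target density are placeholders for what the notebook assembles per dataset:

```python
import torch

optimizer = torch.optim.Adam(model.parameters(), lr = 1e-3)

for fluidFeatures, fi, fj, fluidEdgeLengths, targetDensity in dataloader:  # placeholder loader
    optimizer.zero_grad()
    prediction = model(fluidFeatures, fi, fj, fluidEdgeLengths)
    loss = torch.nn.functional.mse_loss(prediction, targetDensity)         # learn the SPH density
    loss.backward()
    optimizer.step()
```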
This paper included four datasets in its evaluations. You can find a tool to visualize the datasets under notebooks/datasetVisualizer. Summary information:
| Test Case | Scenario | Size | Link |
|---|---|---|---|
| I | compressible 1D | 7.9 GByte | https://huggingface.co/datasets/Wi-Re/SFBC_dataset_I |
| II | WCSPH 2D | 45 GByte | https://huggingface.co/datasets/Wi-Re/SFBC_dataset_II |
| III | IISPH 2D | 2.1 GByte | https://huggingface.co/datasets/Wi-Re/SFBC_dataset_III |
| IV | 3D Toy | 1.2 GByte | https://huggingface.co/datasets/Wi-Re/SFBC_dataset_IV |
Test Case I was a pseudo-compressible 1D SPH simulation with random initial conditions. The dataset comprises 36 files with 2048 timesteps and 2048 particles each. Example:
You can find the dataset here (size approximately 7.9 GByte); it is also included as a submodule in this repo under datasets/SFBC_dataset_I.
Test Case II was a weakly-compressible 2D SPH simulation with random initial conditions, enclosed by a rigid boundary. The dataset comprises 36 simulations for training and 16 for testing, each with 4096 timesteps and 4096 particles. Example:
You can find the dataset here (size approximately 45 GByte); it is also included as a submodule in this repo under datasets/SFBC_dataset_II.
Test Case III was an incompressible 2D SPH simulation in which two randomly sized blobs of liquid collide in free space. The dataset comprises 64 simulations for training and 4 for testing, each with 128 timesteps and 4096 particles. Example:
You can find the dataset here (size approximately 2.1 GByte); it is also included as a submodule in this repo under datasets/SFBC_dataset_III.
The last test case, IV, is a toy problem to evaluate SPH kernel learning in a 3D setting. For this setup we sampled 4096 particles in a
You can find the dataset here (size approximately 1.2 GByte); it is also included as a submodule in this repo under datasets/SFBC_dataset_IV.
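To take a first look at any of the dataset files yourself, here is a minimal sketch, assuming the HDF5 format suggested by the h5py dependency below (the file name is a placeholder):

```python
import h5py

# Open a dataset file (placeholder name) and print its group/dataset hierarchy.
with h5py.File('datasets/SFBC_dataset_I/example.hdf5', 'r') as f:
    f.visit(print)
```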
To set up a conda environment for this code base, simply run:
conda create -n torch_sfbc python=3.11 -y
conda activate torch_sfbc
conda install -c anaconda ipykernel -y
conda install -c "nvidia/label/cuda-12.1.0" cuda-toolkit cudnn -y
conda install pytorch torchvision torchaudio pytorch-cuda=12.1 -c pytorch -c nvidia -y
pip install tqdm seaborn pandas matplotlib numpy tomli msgpack msgpack-numpy portalocker h5py zstandard ipykernel ipympl
pip install scipy scikit-image scikit-learn
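Afterwards you can verify that PyTorch was installed with working CUDA support:

```python
import torch

print(torch.__version__, torch.cuda.is_available())
```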
If you would like to use a faster and less memory-intensive neighbor search to build your networks, especially for larger simulations, consider using our torchCompactRadius package (pip install torchCompactRadius), which uses a C++/CUDA implementation of a compact hash-map-based neighbor search. Note that this module performs a just-in-time compilation on its first use for new configurations, which may take some time (it will time out your Colab instance as a free user).
This work was supported by the DFG Individual Research Grant TH 2034/1-2.