# Neural Implicit Flow (NIF): mesh-agnostic dimensionality reduction
![example workflow](https://github.com/pswpswpsw/nif/workflows/Tests/badge.svg)
[![License](https://img.shields.io/github/license/pswpswpsw/nif)](https://github.com/pswpswpsw/nif/blob/main/LICENSE)[![DOI](https://zenodo.org/badge/460537754.svg)](https://zenodo.org/badge/latestdoi/460537754)
<p align="center">
<img src="./misc/myimage.gif" alt="animated" />
</p>
- NIF is a mesh-agnostic dimensionality reduction paradigm for parametric spatial-temporal fields. For decades, dimensionality reduction (e.g., proper orthogonal decomposition, convolutional autoencoders) has been the very first step in reduced-order modeling of any large-scale spatial-temporal dynamics.
- Unfortunately, these frameworks are either not extendable to realistic industrial scenarios, e.g., adaptive mesh refinement, or cannot precede nonlinear operations without resorting to lossy interpolation on a uniform grid. Details can be found in our [paper](https://arxiv.org/pdf/2204.03216.pdf).
- NIF is built on top of Keras, in order to minimize users' effort in using the code and maximize the existing functionality in Keras.
## Highlights
<p align="center">
<img src="./misc/compare.gif" alt="animated" />
</p>
- Built on top of **TensorFlow 2.x** with **Keras model subclassing**, giving hassle-free access to many up-to-date advanced concepts and features
```python
from nif import NIF
# set up the configurations, load the dataset, etc.
model_ori = NIF(...)
model_opt = model_ori.build()
model_opt.compile(optimizer, loss='mse')
model_opt.fit(...)
model_opt.predict(...)
```
- **Distributed learning**: data parallelism across multiple GPUs on a single node
```python
import contextlib
import tensorflow as tf

enable_multi_gpu = True
cm = tf.distribute.MirroredStrategy().scope() if enable_multi_gpu else contextlib.nullcontext()
with cm:
    # build and compile the model inside the strategy scope
    # ...
    model.fit(...)
    # ...
```
- Flexible training schedule: e.g., first train with a standard optimizer (e.g., Adam), then **fine-tune with L-BFGS**
```python
from nif.optimizers import TFPLBFGS
# load the previously trained model
new_model_ori = NIF(cfg_shape_net, cfg_parameter_net, mixed_policy)
new_model = new_model_ori.model()
new_model.load_weights(...)
# prepare the dataset
data_feature = ...  # pointwise inputs (parameters and coordinates)
data_label = ...    # pointwise targets
# fine tune with L-BFGS
loss_fun = tf.keras.losses.MeanSquaredError()
fine_tuner = TFPLBFGS(new_model, loss_fun, data_feature, data_label, display_epoch=10)
fine_tuner.minimize(rounds=200, max_iter=1000)
new_model.save_weights("./fine-tuned/ckpt")
```
- Templates for many useful customized callbacks
```python
# setting up the model
# ...
# - tensorboard
tensorboard_callback = tf.keras.callbacks.TensorBoard(log_dir="./tb-logs", update_freq='epoch')
# - printing, model save checkpoints, etc.
class LossAndErrorPrintingCallback(tf.keras.callbacks.Callback):
    # ....
# - learning rate schedule
def scheduler(epoch, lr):
    if epoch < 1000:
        return lr
    else:
        return 1e-4
scheduler_callback = tf.keras.callbacks.LearningRateScheduler(scheduler)
# - collecting callbacks into model.fit(...)
callbacks = [tensorboard_callback, LossAndErrorPrintingCallback(), scheduler_callback]
model_opt.fit(train_dataset, epochs=nepoch, batch_size=batch_size,
              shuffle=False, verbose=0, callbacks=callbacks)
```
- Simple extraction of subnetworks
```python
model_ori = NIF(...)
# ....
# extract latent space encoder network
model_p_to_lr = model_ori.model_p_to_lr()
lr_pred = model_p_to_lr.predict(...)
# extract latent-to-weight network: from latent representation to weights and biases of shapenet
model_lr_to_w = model_ori.model_lr_to_w()
w_pred = model_lr_to_w.predict(...)
# extract shapenet: inputs are weights and spatial coordinates, output is the field of interest
model_x_to_u_given_w = model_ori.model_x_to_u_given_w()
u_pred = model_x_to_u_given_w.predict(...)
```
- Get input-output Jacobian or Hessian.
```python
from tensorflow.keras import Model

model = ...  # your keras.Model
x = ...      # your dataset
# define the indices of the outputs (targets) and inputs (sources)
x_index = [0, 1, 2, 3]
y_index = [0, 1, 2, 3, 4]
# wrap the keras.Model with JacobianLayer
from nif.layers import JacobianLayer
y_and_dydx_layer = JacobianLayer(model, y_index, x_index)
y, dydx = y_and_dydx_layer(x)
model_with_jacobian = Model([x], [y, dydx])
# wrap the keras.Model with HessianLayer
from nif.layers import HessianLayer
y_and_dydx_and_dy2dx2_layer = HessianLayer(model, y_index, x_index)
y, dydx, dy2dx2 = y_and_dydx_and_dy2dx2_layer(x)
model_with_jacobian_and_hessian = Model([x], [y, dydx, dy2dx2])
```
- Data normalization for multi-scale problems
- simply feed `n_para`: number of parameters, `n_x`: input dimension of shapenet, `n_target`: output dimension of shapenet, and `raw_data`: a numpy array with shape `(number of pointwise data points, number of features)`, where the feature columns are the parameters, coordinates, targets, etc.
```python
from nif.data import PointWiseData
data_n, mean, std = PointWiseData.minmax_normalize(raw_data=data, n_para=1, n_x=3, n_target=1)
```
- Large-scale training with tfrecord converter
- all you need to do is prepare one big npz file that contains all the pointwise data
- `.get_tfr_meta_dataset` will read all files under the searched directory that end with `.tfrecord`
```python
from nif.data.tfr_dataset import TFRDataset
fh = TFRDataset(n_feature=4, n_target=3)
# generating tfrecord files from a single big npz file (say gigabytes)
fh.create_from_npz(...)
# prepare some model
model = ...
model.compile(...)
# obtaining a meta dataset
meta_dataset = fh.get_tfr_meta_dataset(...)
# start sub-dataset-batching
for batch_file in meta_dataset:
    batch_dataset = fh.gen_dataset_from_batch_file(batch_file, batch_size)
    model.fit(...)
```
- Save and load models (via Checkpoints only)
```python
import json
import nif

# save the config
model.save_config("config.json")
# save the weights
model.save_weights("./saved_weights/ckpt-{}/ckpt".format(epoch))
# load the config
with open("config.json", "r") as f:
    config = json.load(f)
model_ori = nif.NIF(**config)
model = model_ori.model()
# load the weights
model.load_weights("./saved_weights/ckpt-999/ckpt")
```
- Network pruning and quantization
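- below is a minimal sketch of low-magnitude pruning followed by post-training quantization, using the `tensorflow_model_optimization` package from the requirements; `parameter_net` and `train_dataset` are hypothetical placeholders, and tutorial 7 below shows the exact workflow used for NIF
```python
import tensorflow as tf
import tensorflow_model_optimization as tfmot

# hypothetical placeholders: `parameter_net` is a plain (functional) keras.Model,
# `train_dataset` is a tf.data.Dataset of (features, targets)
parameter_net = ...
train_dataset = ...

# prune: wrap the model so low-magnitude weights are progressively zeroed during training
schedule = tfmot.sparsity.keras.PolynomialDecay(
    initial_sparsity=0.2, final_sparsity=0.8, begin_step=0, end_step=1000)
pruned_net = tfmot.sparsity.keras.prune_low_magnitude(parameter_net, pruning_schedule=schedule)
pruned_net.compile(optimizer="adam", loss="mse")
pruned_net.fit(train_dataset, epochs=10,
               callbacks=[tfmot.sparsity.keras.UpdatePruningStep()])

# quantize: strip the pruning wrappers, then apply post-training quantization via TFLite
final_net = tfmot.sparsity.keras.strip_pruning(pruned_net)
converter = tf.lite.TFLiteConverter.from_keras_model(final_net)
converter.optimizations = [tf.lite.Optimize.DEFAULT]
quantized_bytes = converter.convert()
```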
## Google Colab Tutorial
1. **Hello world! A simple fitting on 1D travelling wave** [![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/pswpswpsw/nif/blob/main/tutorial/1_simple_1d_wave.ipynb)
- learn how to use class `nif.NIF`
- model checkpoints/restoration
- mixed precision training
- L-BFGS fine tuning
2. **Tackling multi-scale data** [![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/pswpswpsw/nif/blob/main/tutorial/2_multi_scale_NIF.ipynb)
- learn how to use class `nif.NIFMultiScale`
- demonstrate the effectiveness of learning high-frequency data
3. **Learning linear representation** [![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/pswpswpsw/nif/blob/main/tutorial/3_multi_scale_linear_NIF.ipynb)
- learn how to use class `nif.NIFMultiScaleLastLayerParameterized`
- demonstrate on a (shortened) flow over a cylinder case from an AMR solver
4. **Getting input-output derivatives is super easy** [![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/pswpswpsw/nif/blob/main/tutorial/4_get_gradients_by_wrapping_model_with_layer.ipynb)
- learn how to use `nif.layers.JacobianLayer`, `nif.layers.HessianLayer`
5. **Scaling to hundreds of GB of data** [![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/pswpswpsw/nif/blob/main/tutorial/5_large_scale_training_on_tensorflow_record_data.ipynb)
- learn how to use `nif.data.tfr_dataset.TFRDataset` to create `tfrecord` from `npz`
- learn how to perform sub-dataset-batch training with `model.fit`
6. **Revisit NIF on multi-scale data with regularization** [![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/pswpswpsw/nif/blob/main/tutorial/6_revisit_multi_scale_NIF_with_L1_L2_regularization.ipynb)
- learn how to use L1 or L2 regularization for weights and biases in ParameterNet.
- a demonstration of the failure of NIF-MultiScale at increased spatial interpolation when dealing with high-frequency signals.
- this means you need to be cautious about increasing the spatial sampling resolution when dealing with high-frequency signals.
- learn that L2 or L1 regularization does not seem to help resolve the above issue.
7. **NIF Compression** [![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/pswpswpsw/nif/blob/main/tutorial/7_model_pruning_and_quantization.ipynb)
- learn how to use low-magnitude pruning and quantization to compress ParameterNet
8. **Revisit NIF on multi-scale data: Sobolev training helps remove spurious signals** [![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/pswpswpsw/nif/blob/main/tutorial/8_revisit_multi_scale_NIF_with_sobolov_training.ipynb)
- learn how to use `nif.layers.JacobianLayer` to perform Sobolev training (a minimal sketch follows this list)
- learn how to monitor different loss terms using customized Keras metrics
- learn that feeding derivative information to the system helps resolve the super-resolution issue
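For reference, below is a minimal sketch of what Sobolev training can look like by wrapping a model with `nif.layers.JacobianLayer` and penalizing errors in both the field values and their derivatives; `base_model`, `x_train`, `u_train`, `dudx_train`, the index lists, and the derivative loss weight are all hypothetical placeholders, not the tutorial's exact code.
```python
import tensorflow as tf
from tensorflow.keras import Model
from nif.layers import JacobianLayer

base_model = ...  # your keras.Model, e.g., built with nif
# wrap the model so it also outputs the input-output Jacobian (hypothetical indices)
x = tf.keras.Input(shape=(4,))
y, dydx = JacobianLayer(base_model, [0], [1, 2, 3])(x)
sobolev_model = Model(x, [y, dydx])

# Sobolev training: one loss term on the field values, one on their derivatives
sobolev_model.compile(optimizer="adam",
                      loss=["mse", "mse"],
                      loss_weights=[1.0, 0.1])  # derivative weight is an assumption

# hypothetical data: pointwise inputs, field values, and reference derivatives
x_train, u_train, dudx_train = ..., ..., ...
sobolev_model.fit(x_train, [u_train, dudx_train], epochs=10)
```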
## Requirements
```
matplotlib
numpy
tensorflow_probability==0.18.0
tensorflow_model_optimization==0.7.3
```
## Issues, bugs, requests, ideas
Use the [issues](https://github.com/pswpswpsw/nif/issues) tracker to report bugs.
## How to cite
If you find NIF helpful, you can cite our JMLR [paper](https://www.jmlr.org/papers/volume24/22-0365/22-0365.pdf) using the following BibTeX entry:
```
@article{JMLR:v24:22-0365,
author = {Shaowu Pan and Steven L. Brunton and J. Nathan Kutz},
title = {Neural Implicit Flow: a mesh-agnostic dimensionality reduction paradigm of spatio-temporal data},
journal = {Journal of Machine Learning Research},
year = {2023},
volume = {24},
number = {41},
pages = {1--60},
url = {http://jmlr.org/papers/v24/22-0365.html}
}
```
## Contributors
- [Shaowu Pan](http://www.shaowupan.com)
- [Yaser Afshar](https://github.com/yafshar)
## License
[LGPL-2.1 License](https://github.com/pswpswpsw/nif/blob/main/LICENSE)