Hi Flare developers,

Offline training is running. As the number of DFT calls grows, the "Time of Update GP (s)" increases sharply, as shown in the figures. The hyperparameters are held fixed with `train_hyps = [20, 30]` to prevent hyperparameter optimization.
Best,
Li Yuke
Since GPs require matrix inversion, they scale cubically in the number of training points. For FLARE, hundreds of structures is considered a fairly large training set. Make sure that you only add the structures and environments actually needed (as determined by the uncertainty).
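The cubic cost is easy to see in a toy model: each GP update builds and factorizes an N×N kernel matrix, so doubling the training set makes the dominant step roughly eight times slower. The sketch below is illustrative only, not FLARE's actual update code; the kernel choice and function names are assumptions.

```python
import time
import numpy as np

def gp_update_cost(n_train, seed=0):
    """Time the O(N^3) factorization that dominates a toy GP update.

    This is a minimal sketch: a squared-exponential kernel over random
    3-D points, with jitter on the diagonal for numerical stability.
    """
    rng = np.random.default_rng(seed)
    X = rng.normal(size=(n_train, 3))
    # Pairwise squared distances -> kernel matrix
    d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    K = np.exp(-0.5 * d2) + 1e-8 * np.eye(n_train)
    t0 = time.perf_counter()
    np.linalg.cholesky(K)  # O(N^3): the step that grows with DFT calls
    return time.perf_counter() - t0

# Timings grow roughly cubically with the number of training points
for n in (200, 400, 800):
    print(n, gp_update_cost(n))
```

This is why pruning the training set (adding only high-uncertainty structures and environments) matters more than any single optimization flag.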
You may be able to improve the performance by playing around with OpenMP and BLAS parallelization settings. Generally, MKL gives the best performance.
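As a concrete starting point, the standard OpenMP/MKL environment variables below control thread counts for the BLAS calls; the specific values are examples to tune for your machine, not FLARE-specific settings.

```shell
# Pin thread counts for OpenMP regions and MKL-backed BLAS calls.
# 8 is an example; set this to the physical core count of your node.
export OMP_NUM_THREADS=8
export MKL_NUM_THREADS=8
# Keep MKL from re-deciding thread counts inside OpenMP regions.
export MKL_DYNAMIC=FALSE
```

With these set before launching the training script, the kernel-matrix factorization should use all requested cores; oversubscribing (threads > cores) usually hurts rather than helps.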