George/Jim found that the time to send data from the forecast tasks to the write tasks can reach 4.5 s per data transfer in the high-resolution global and regional runs. Gerhard has been working on ESMF snapshot 8.0.1 to fix the problem; the improvements include:
writing all messages from the sending PEs that go to the same dst PET into a single buffer and then sending the whole buffer with a single MPI_Isend (see the C sketch below),
an option to drop the ESMF buffer afterwards for memory relief,
optimization of memory copies on the send side.
Also, using a large value for the srcTermProcessing argument can reduce the data volume and further reduce the time, but for the global FV3 Gaussian grid we will still keep srcTermProcessing=0; for regional FV3 a different value can be used. srcTermProcessing will be added to model_configure with a default value of 0, and fv3_cap will be updated with the ESMF_VMEpochStart feature.
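For illustration, here is a minimal C/MPI sketch of the aggregation idea in the first item above. It is not ESMF's actual implementation; the names (Msg, pack_and_send) and the double payload type are invented for the example.

```c
/*
 * Sketch: instead of posting one MPI_Isend per message going to the
 * same destination rank, pack all of them into one buffer and post a
 * single MPI_Isend for the aggregated buffer.
 */
#include <mpi.h>
#include <stdlib.h>
#include <string.h>

typedef struct {
    const double *data;  /* payload destined for the same dst rank */
    int           count; /* number of doubles in this payload      */
} Msg;

/* Pack all payloads for one destination into a single buffer and send
 * them with one MPI_Isend instead of nmsg separate sends.            */
static double *pack_and_send(const Msg *msgs, int nmsg, int dst, int tag,
                             MPI_Comm comm, MPI_Request *req)
{
    int total = 0;
    for (int i = 0; i < nmsg; i++)
        total += msgs[i].count;

    double *buf = malloc((size_t)total * sizeof(double));
    int off = 0;
    for (int i = 0; i < nmsg; i++) {          /* one memcpy per message */
        memcpy(buf + off, msgs[i].data,
               (size_t)msgs[i].count * sizeof(double));
        off += msgs[i].count;
    }

    /* single send for the whole aggregated buffer */
    MPI_Isend(buf, total, MPI_DOUBLE, dst, tag, comm, req);
    return buf;  /* caller frees the buffer after MPI_Wait on req */
}
```

The buffer on the destination side is then unpacked once per transfer, so the per-message MPI latency is paid only once per dst PET instead of once per message.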
With these changes, the time to send the data is reduced from 4.5 s to 0.5 s.
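For reference, the new entry in model_configure could look something like the following; the exact key name and formatting are assumptions until the change is in place.

```
# assumed model_configure entry; default 0 keeps the current behavior
srcTermProcessing:       0
# a regional FV3 run may set a nonzero value to reduce the data volume
```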