We concluded that the following two reasons might have caused the inconsistency:

1. Different padding sizes under different batch sizes. (Verified by inspecting the output data; see the first sketch below.)
2. Different floating-point precision: `trainer.test` computes the average loss in float32 within the C++ code, while `inference.infer` does the averaging in float64 in Python code (see the second sketch below).
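A minimal sketch of the first point, assuming utterances in a batch are padded to the longest utterance in that batch (the `padded_lengths` helper and the length values are hypothetical, not the actual DeepSpeech2 data pipeline): changing the batch size changes how much padding each utterance receives, so any loss that is not fully masked over padded frames will differ between runs.

```python
# Hypothetical utterance lengths (in frames); values are illustrative only.
lengths = [80, 95, 120, 130, 150, 200]

def padded_lengths(lengths, batch_size):
    """Pad every utterance in a batch to the longest utterance in that batch."""
    batches = [lengths[i:i + batch_size] for i in range(0, len(lengths), batch_size)]
    return [max(b) for b in batches for _ in b]

# The same utterances end up with different padded lengths under different batch sizes.
print(padded_lengths(lengths, 2))  # [95, 95, 130, 130, 200, 200]
print(padded_lengths(lengths, 3))  # [120, 120, 120, 200, 200, 200]
```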
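For the second point, a small sketch (not the actual Paddle code) of how accumulating and averaging the same per-utterance losses in float32 versus float64 can yield slightly different means, purely from rounding during accumulation:

```python
import numpy as np

# Hypothetical per-utterance losses; the distribution and size are illustrative only.
losses = np.random.RandomState(0).uniform(50.0, 200.0, size=10000).astype(np.float32)

# Accumulate and average in float32 (mimicking an fp32 C++ accumulator).
avg_fp32 = np.float32(0.0)
for x in losses:
    avg_fp32 += x
avg_fp32 /= np.float32(len(losses))

# Accumulate and average in float64 (mimicking the Python-side averaging).
avg_fp64 = losses.astype(np.float64).mean()

print(avg_fp32, avg_fp64)  # The two means differ slightly due to fp32 rounding error.
```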
Experiments were run with DeepSpeech2 on Paddle.