[tensorflow-pytorch-aarch64--r22.09] TensorFlow docker images are broken
Thanks for the report @snadampal, we'll look into it.
Our builds include a few accuracy tests on the Python examples, the C++ examples, and the MLCommons examples; however, the MLCommons accuracy test uses --count=1 and expects 100% accuracy.
Could you provide more details of the failure: run lines, logs, and environment (which platform, whether fast maths is enabled, etc.)?
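For reference, a rough sketch of what a fuller accuracy run could look like, assuming the MLCommons inference reference scripts shipped with the examples accept the --accuracy and --count flags in the usual way; the in-container path and the "tf" backend name are assumptions and may differ in the r22.09 image:

cd examples/MLCommons/inference/vision/classification_and_detection
# --count=1 checks a single sample, so 100% accuracy on it can hide regressions;
# a larger sample count makes any accuracy drift visible in the reported score.
./run_local.sh tf resnet50 cpu --accuracy --count 500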
Issue Description
The TensorFlow docker images built from the r22.09 tag with oneDNN/ACL, as well as those available on Docker Hub (https://hub.docker.com/r/armswdev/tensorflow-arm-neoverse:r22.09-tf-2.10.0-onednn-acl or latest), produce incorrect results for the MLPerf ResNet-50 model.
The last working tag was tensorflow-pytorch-aarch64--r22.08.
The official TF 2.10 wheel works fine, so the issue is in one of the staging patches maintained on top of TF 2.10:
https://github.com/ARM-software/Tool-Solutions/tree/main/docker/tensorflow-aarch64/patches
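One way to confirm that the regression comes from the downstream patches rather than upstream TensorFlow is to swap the patched build for an official wheel inside the same container and re-run the benchmark. A minimal sketch, assuming the container uses a pip-managed TensorFlow and that an official 2.10 aarch64 wheel (e.g. tensorflow-cpu-aws==2.10.0; the package name is an assumption) installs cleanly on this platform:

# Record which TensorFlow build the container is currently using.
python -c "import tensorflow as tf; print(tf.__version__)"
# Remove the patched build and install an official 2.10 wheel (package name is
# an assumption; use whichever official aarch64 wheel applies), then re-run the
# MLPerf ResNet-50 accuracy run and compare the reported accuracy.
pip uninstall -y tensorflow
pip install tensorflow-cpu-aws==2.10.0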
How to reproduce
docker pull armswdev/tensorflow-arm-neoverse
Follow this section to run MLPerf ResNet-50 inference with the "--accuracy" option (a combined sketch follows below):
https://github.com/ARM-software/Tool-Solutions/blob/main/docker/tensorflow-aarch64/examples/README.md#mlcommons-tm-benchmarks
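Put together, the reproduction is roughly the following; the image tag and the in-container example path are assumptions and may differ between releases:

docker pull armswdev/tensorflow-arm-neoverse:r22.09-tf-2.10.0-onednn-acl
docker run -it --rm armswdev/tensorflow-arm-neoverse:r22.09-tf-2.10.0-onednn-acl
# Inside the container, follow the MLCommons section of the examples README
# and run ResNet-50 inference with the --accuracy option, e.g.:
cd examples/MLCommons/inference/vision/classification_and_detection
./run_local.sh tf resnet50 cpu --accuracy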