Some QONNX and FINN flows require data layout inference, i.e. explicitly identifying which tensor dimension corresponds to the batch, channels, image height/width and so on. See `InferDataLayouts` for more details on how this currently works.
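For reference, a minimal sketch of how the current mechanism is typically driven (the model file name is a placeholder, and a 4D NCHW graph input is assumed):

```python
import qonnx.core.data_layout as DataLayout
from qonnx.core.modelwrapper import ModelWrapper
from qonnx.transformation.infer_data_layouts import InferDataLayouts

model = ModelWrapper("model.onnx")  # placeholder path
# Annotate the graph input, then let the transform propagate layouts.
model.set_tensor_layout(model.graph.input[0].name, DataLayout.NCHW)
model = model.transform(InferDataLayouts())
# Inspect what was inferred for another tensor, e.g. the graph output.
print(model.get_tensor_layout(model.graph.output[0].name))
```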
The current approach is quite ad-hoc and breaks for almost any previously-unseen op or pattern, potentially causing invalid transformations later on due to incorrect data layout annotations. We should consider a major overhaul of data layout inference to address this.
Some thoughts on the matter:
It may not even be possible to always define sane data layouts for all tensors in an arbitrary NN topology. As just one example, it is possible to create arbitrary dimensions with `Reshape` that don't clearly correspond to any meaningful dimension.
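To make this concrete, a two-line numpy illustration of the problem:

```python
import numpy as np

x = np.zeros((1, 3, 4, 4))  # plausibly NCHW: batch, channels, height, width
y = x.reshape(2, 24)        # same 48 elements, but no axis is cleanly N, C, H or W anymore
```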
We don't necessarily care about identifying all dimensions as part of data layout inference - mostly, the location of the channels dimension for convnets, and perhaps the location of the batch dimension, are the ones that matter. So perhaps an overhauled system should allow for incomplete specifications like `['N', '_', '_', 'C']`.
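A minimal sketch of what such partial specifications could look like, with `'_'` as a wildcard; none of this exists in qonnx today, it is purely illustrative:

```python
WILDCARD = "_"

def layouts_compatible(a, b):
    """Two partial layouts agree if every non-wildcard position matches."""
    if len(a) != len(b):
        return False
    return all(x == y or WILDCARD in (x, y) for x, y in zip(a, b))

def merge_layouts(a, b):
    """Unify two compatible partial layouts, keeping the more specific label."""
    assert layouts_compatible(a, b)
    return [y if x == WILDCARD else x for x, y in zip(a, b)]

print(merge_layouts(["N", "_", "_", "C"], ["_", "H", "W", "C"]))  # ['N', 'H', 'W', 'C']
```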
Certain ops (due to how they are defined in the standard) dictate where the batch and channels dimensions are located, e.g. `BatchNormalization` and `Conv`. This could be used to propagate layout annotations both backwards and forwards in the graph. If the propagated layout clashes with the one derived from user-provided input annotations, this should raise an error.
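A sketch of what op-dictated propagation with clash detection could look like, assuming 4D tensors; the op table and `propagate` function are hypothetical, and the helpers are as in the previous sketch:

```python
WILDCARD = "_"

def layouts_compatible(a, b):
    return len(a) == len(b) and all(x == y or WILDCARD in (x, y) for x, y in zip(a, b))

def merge_layouts(a, b):
    return [y if x == WILDCARD else x for x, y in zip(a, b)]

# Ops whose ONNX definition pins the batch and channels positions (channels-first).
FIXED_LAYOUT_OPS = {
    "Conv": ["N", "C", "_", "_"],
    "BatchNormalization": ["N", "C", "_", "_"],  # channels at axis 1 by definition
}

def propagate(op_type, known_layout):
    """Merge an op-dictated layout into an existing annotation; raise on clash."""
    dictated = FIXED_LAYOUT_OPS.get(op_type)
    if dictated is None:
        return known_layout
    if not layouts_compatible(dictated, known_layout):
        raise ValueError(f"Layout clash at {op_type}: {dictated} vs {known_layout}")
    return merge_layouts(dictated, known_layout)

print(propagate("Conv", ["N", "_", "H", "W"]))  # -> ['N', 'C', 'H', 'W']
```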