Can't create big endian dtypes in V3 array #2324
Comments
A bigger problem is that, by making endianness a serialization detail, the zarr dtype model has diverged from the numpy dtype model. If our array object uses zarr v3 data type semantics, then …
One solution could be to always translate the endianness of the on-disk data to the endianness of the in-memory data. This could be done within …
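The translation proposed above can be sketched in plain numpy. This is a hypothetical illustration, not zarr API: the function names `decode_to_native` and `encode_from_native` are invented here, and the sketch only shows the byte-swapping step, assuming the on-disk byte order is known from metadata.

```python
import numpy as np

def decode_to_native(buf: bytes, disk_dtype: np.dtype) -> np.ndarray:
    """Read raw bytes stored in disk_dtype's byte order and return an
    array in the machine's native byte order (hypothetical helper)."""
    arr = np.frombuffer(buf, dtype=disk_dtype)
    # astype with a native-order dtype swaps bytes when needed
    return arr.astype(disk_dtype.newbyteorder("="))

def encode_from_native(arr: np.ndarray, disk_dtype: np.dtype) -> bytes:
    """Serialize a native-order array using disk_dtype's byte order
    (hypothetical helper)."""
    return arr.astype(disk_dtype).tobytes()

# Round-trip: stored big-endian, held in memory as native-endian.
disk = np.dtype(">i4")
stored = np.array([1, 2, 3], dtype=disk).tobytes()
in_memory = decode_to_native(stored, disk)   # in_memory.dtype.byteorder == "="
```

With this approach, user code would only ever see native-endian arrays, and the declared on-disk order would be applied transparently at the serialization boundary.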
Looks like this either needs resolving, or documenting as a breaking change at #2596 for zarr 3
Should we put endianness in the new runtime …
I've moved this to "After 3.0.0" and will be adding this to the work in progress section of the v3 migration docs. |
I'm running into this too - just to check, is this something that is going to be fixed in the 3.0.x series of releases, or is it a breaking change that will stay, meaning we should adjust our existing code?
I think we intend to fix this, but it will force us to revise the semantics of the …
This works with V2 data:
But raises for V3:
In the V3 spec, endianness is now handled by a codec: https://zarr-specs.readthedocs.io/en/latest/v3/codecs/bytes/v1.0.html
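Per the linked spec, the `bytes` codec's configuration is what records the byte order in v3 array metadata, rather than the dtype itself. A small numpy check illustrates what `"big"` means for the serialized chunk bytes:

```python
import numpy as np

# Codec entry as it appears in zarr v3 array metadata, per the
# bytes codec spec: endianness lives here, not in the data type.
codec = {"name": "bytes", "configuration": {"endian": "big"}}

# "big" means most-significant byte first in the stored bytes:
assert np.array([1], dtype=">u2").tobytes() == b"\x00\x01"
assert np.array([1], dtype="<u2").tobytes() == b"\x01\x00"
```

This is why a big-endian numpy dtype like `>i4` has no direct v3 data-type equivalent: the v3 data type names only the value type, and the codec pipeline decides the byte layout.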
Xarray tests create data with big endian dtypes, and Zarr needs to know how to handle them.