- New function `is_writeable_array` adds transparent support for readonly arrays, such as JAX arrays or numpy arrays with `.flags.writeable=False`.
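
  A minimal sketch of how downstream code might use this helper to choose between in-place and out-of-place updates (the `zero_negatives` function is purely illustrative):

  ```python
  import numpy as np
  from array_api_compat import array_namespace, is_writeable_array

  def zero_negatives(x):
      """Set negative entries to zero, in place when the input allows it."""
      xp = array_namespace(x)
      if is_writeable_array(x):
          x[x < 0] = 0  # e.g. regular NumPy arrays
          return x
      # Readonly inputs (JAX arrays, NumPy arrays with writeable=False, ...)
      return xp.where(x < 0, xp.zeros_like(x), x)

  print(zero_negatives(np.asarray([-1.0, 2.0, -3.0])))  # [0. 2. 0.]
  ```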
- `asarray(..., copy=None)` with `dask` backend always copies, so that `copy=None` and `copy=True` are equivalent for the `dask` backend. This change is made to be forward compatible with the `dask==2024.12` release.
- `array_namespace` accepts (and ignores) `None` and python scalars (int, float, complex, bool). This change is to simplify downstream adoption, for functions where arguments can be either arrays or scalars.
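
  A minimal sketch of the kind of downstream signature this simplifies (the `shifted_mean` function and its arguments are hypothetical):

  ```python
  from array_api_compat import array_namespace

  def shifted_mean(x, shift=0.0, weights=None):
      # `shift` may be a Python scalar and `weights` may be None; both are
      # now accepted (and ignored) when determining the namespace.
      xp = array_namespace(x, shift, weights)
      if weights is not None:
          x = x * weights
      return xp.mean(x) + shift
  ```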
- `vecdot` conjugates its first argument, as stipulated by the Array API spec. Previously, conjugation of the first argument was missing.
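
  For complex inputs this means `vecdot(x1, x2)` computes the sum of `conj(x1) * x2` over the contracted axis. A small illustration, assuming the wrapped NumPy namespace exposes `vecdot`:

  ```python
  import array_api_compat.numpy as xp

  x1 = xp.asarray([1 + 1j, 2 - 1j])
  x2 = xp.asarray([3 + 0j, 1 + 2j])
  # conj(x1) . x2 = (1 - 1j)*3 + (2 + 1j)*(1 + 2j) = (3 - 3j) + 5j = 3 + 2j
  print(xp.vecdot(x1, x2))  # (3+2j)
  ```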
- `__array_api_version__` for the wrapped APIs is now set to `2023.12`.
- Wrap `sign` so that it always uses the standard definition for complex numbers, and always propagates nans.
- Wrap `dask.array.fft`.
- Readd `python_requires` to the package metadata.
- New helper functions to determine if a namespace is from a given library ({func}`~.is_numpy_namespace`, {func}`~.is_torch_namespace`, etc.).
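
  A quick illustration of the new helpers, assuming the NumPy compat namespace is importable:

  ```python
  import array_api_compat.numpy as xp
  from array_api_compat import is_numpy_namespace, is_torch_namespace

  print(is_numpy_namespace(xp))  # True
  print(is_torch_namespace(xp))  # False
  ```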
- More support for the 2023.12 version of the standard. This includes
  - Wrappers for `cumulative_sum()`.
  - Wrappers for `unstack()`.
  - Update floating-point type promotion in `sum()`, `prod()`, and `trace()` to be in line with the 2023.12 specification (32-bit types no longer promote to 64-bit when `dtype=None`).
  - Add the inspection APIs to the wrapped namespaces. These can be accessed with `xp.__array_namespace_info__()`.
  - Various fixes to the `clip()` wrappers.
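
  A short sketch of querying the inspection APIs through the wrapped NumPy namespace (the printed values are indicative only):

  ```python
  import array_api_compat.numpy as xp

  info = xp.__array_namespace_info__()
  print(info.default_dtypes())  # e.g. {'real floating': float64, ...}
  print(info.devices())         # e.g. ['cpu']
  ```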
- `torch.conj` now wraps `torch.conj_physical`, which makes a copy rather than setting the conjugation bit, as arrays with the conjugation bit set do not support some APIs.
- `torch.sign` is now wrapped to support complex numbers and propagate nans properly.
- NumPy 2.0 is now wrapped again. Previously it was unwrapped because it has full 2022.12 array API support, but it now requires wrapping again for 2023.12 support.
- Support for JAX 0.4.32 and newer, which implements the array API directly in `jax.numpy`.
- `hypot`, `minimum`, and `maximum` (new in 2023.12) are wrapped in PyTorch to support proper scalar type promotion.
- Add support for ndonnx. Array API support itself lives in the ndonnx library, but this adds the {func}`~.is_ndonnx_array` helper function. (@adityagoel4512)
- Partial support for the 2023.12 version of the standard. This includes
  - Wrappers for `clip()`.
  - torch wrapper for `copysign()` with correct type promotion.

  Note that many of the new functions in the 2023.12 version of the standard are already fully implemented in upstream libraries and will already work.
- Fix a typo in setup.py (@sunpoet).
- Add support for `sparse`. Note that unlike other array libraries, array-api-compat does not contain any wrappers for `sparse` functions. All `sparse` array API support is in `sparse` itself. Thus, there is no `array_api_compat.sparse` submodule, and `array_namespace(<pydata/sparse array>)` returns the `sparse` module.
- Added the function `is_pydata_sparse_array(x)`.
- Fix JAX `float0` arrays. See jax-ml/jax#20620. (@NeilGirdhar)
- Fix `torch.linalg.vector_norm()` when `axis=()`.
- Fix `torch.linalg.solve()` to apply the array API standard rules for when `x2` should be treated as a vector vs. a matrix.
- Fix PyTorch test failures on CI by skipping uint16, uint32, uint64 tests.
- Drop support for Python 3.8.
- NumPy 2.0 is now left completely unwrapped.
- New flag `use_compat` to {func}`~.array_namespace` to force the use or non-use of the compat wrapper namespace. The default is to return a compat namespace when it is appropriate.
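
  A brief sketch of the flag, assuming NumPy is installed (the comments describe the intended behaviour):

  ```python
  import numpy as np
  from array_api_compat import array_namespace

  x = np.asarray([1.0, 2.0])
  array_namespace(x)                    # compat namespace when appropriate (the default)
  array_namespace(x, use_compat=True)   # force the array_api_compat wrapper namespace
  array_namespace(x, use_compat=False)  # force the unwrapped library namespace
  ```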
- Fix the `copy` flag to `asarray` for NumPy, CuPy, and Dask.
- Fix the `device` flag to `asarray` for CuPy.
- Fix various issues with `asarray` for Dask.
- Test Python 3.12 on CI.
- Add more tests for {func}`~.array_namespace`.
- Add more tests for `asarray`.
- Add a test that there are no hard dependencies.
- Add HTML documentation. Includes new documentation on the scope of the package and new developer documentation.
- Fix `array_api_compat.numpy.asarray(torch.Tensor)` to return a NumPy array.
- Allow Python scalars in torch functions.
- Fix the `torch.std` wrapper when correction is an `int`.
- Fix issues with `qr` and `svd` in the Dask wrappers.
- Add support for Dask (@lithomas1).
- Add support for JAX. Note that unlike other array libraries, array-api-compat does not contain any wrappers for JAX functions. All JAX array API support is in JAX itself. Thus, there is no `array_api_compat.jax` submodule, and `array_namespace(<JAX array>)` returns the `jax.experimental.array_api` module.
- The functions `is_numpy_array(x)`, `is_cupy_array(x)`, `is_torch_array(x)`, `is_dask_array(x)`, and `is_jax_array(x)` are now part of the public `array_api_compat` API.
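
  A small sketch of using these helpers to special-case one backend without a hard dependency on it (the `to_numpy` conversion shown is only illustrative):

  ```python
  from array_api_compat import is_numpy_array, is_torch_array

  def to_numpy(x):
      if is_numpy_array(x):
          return x
      if is_torch_array(x):  # does not import torch unless torch is already in use
          return x.detach().cpu().numpy()
      raise TypeError(f"unsupported array type: {type(x)!r}")
  ```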
- Add wrappers for the `fft` extension module for NumPy, CuPy, and PyTorch.
- Allow `'2022.12'` as the `api_version` in {func}`~.array_namespace()`. `'2021.12'` is also supported but will issue a warning since the returned namespace will still be a 2022.12 compliant one.
- Add wrapper for `numpy.linalg.solve`, which broadcasts the inputs according to the standard.
- Add wrappers for various PyTorch linalg functions.
- Fix a bug with `numpy.linalg.vector_norm(keepdims=True)`.
- BREAKING: Update `vecdot` wrappers to apply `axes` before broadcasting, not after. This matches the updated 2023.12 standard wording, and also the behavior of the new `numpy.vecdot` gufunc in NumPy 2.0.
- Fix some linalg functions which were supposed to be in both the main namespace and the linalg extension namespace.
- Add Ruff to CI. (@adonath)
- Test that internal definitions of `__all__` are self-consistent, which should help to avoid issues where wrappers are accidentally not exported to the compat namespaces properly.
- Add support for the upcoming NumPy 2.0 release.
- Added a torch wrapper for `trace` (`torch.trace` doesn't support the `offset` argument or stacking).
- Wrap numpy, cupy, and torch `nonzero` to raise an error for zero-dimensional input arrays.
- Add torch wrapper for `newaxis`.
- Improve error message for `array_namespace`.
- Fix `linalg.cholesky` returning the conjugate of the expected upper decomposition for numpy and cupy.
- Releases are now made with GitHub Actions (thanks @matthewfeickert).
- Fix `torch.result_type()` cross-kind promotion (@lucascolley).
- Fix the `torch.take()` wrapper to make `axis` optional for `ndim = 1`.
- Add `requires-python` metadata to the package (@matthewfeickert).
- Add 2022.12 standard support. This includes things like adding complex dtype support, adding the new `take` function, and various minor changes in the specification.
- Support `"cpu"` in CuPy `to_device()`.
- Return a new array in NumPy/CuPy `reshape(copy=False)`.
- Fix signatures for PyTorch `broadcast_to` and `permute_dims`.
- Support the linalg extension in the `array_api_compat.torch` namespace.
- Add `isdtype()`.
- Fix the `k` keyword argument to `tril` and `triu` in `torch`.
- Rename `get_namespace()` to `array_namespace()` (`get_namespace()` is maintained as a backwards compatible alias).
- The minimum supported NumPy version is now 1.21. Fixed a few issues with NumPy 1.21 (with `unique_*` and `asarray`), although there are also a few known issues with this version (see the README).
- Add `api_version` to `get_namespace()`.
- `array_namespace()` (née `get_namespace()`) now works correctly with `torch` tensors.
- `array_namespace()` (née `get_namespace()`) now works correctly with `numpy.array_api` arrays.
- `array_namespace()` (née `get_namespace()`) now raises `TypeError` instead of `ValueError`.
- Fix the `torch.std` wrapper.
- Add `torch` wrappers for `ones`, `empty`, and `zeros` so that `shape` can be passed as a keyword argument.
- Added support for PyTorch.
- Add helper function `size()` (required if torch is used, as `torch.Tensor.size` is a method that is incompatible with the array API `size`).
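
  A minimal illustration of the helper, assuming torch is installed:

  ```python
  import torch
  from array_api_compat import size

  t = torch.zeros((2, 3))
  print(t.size())  # torch.Size([2, 3]) -- a method, not the array API `size` attribute
  print(size(t))   # 6 -- total number of elements, as the standard's `size` specifies
  ```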
- All wrapper functions that wrap existing library functions now pass through arbitrary `**kwargs`.
- Added CI to run against the array API testsuite.
- Fix `sort(stable=False)` and `argsort(stable=False)` with CuPy.
- Initial release. Includes support for NumPy and CuPy.