Translations and ideas for extensions #5
Comments
Regarding JIT: do you mean 'baked in' like a unified …? I do have a private, experimental 'lazy' autoray module which is like a very minimal, lightweight computational graph - you can perform an entire autoray computation symbolically, do some basic simplifications/reuse, then optionally compute later. The motivation is partly just that libraries like …

The other is that it adds more overhead to … However, if it had some crucial use case and little to no overhead I could be tempted.
Right, I wasn't aware JIT worked so well with autoray. Maybe it's actually a good idea to document this then? Same for autograd.

I do think the overhead would be minimal, but I can't think of a particularly convincing use case. If you're creating a lot of arrays it's annoying to always supply …
Another thing I just noticed is that when calling `ar.do("where", X == 0)`, for numpy and torch we get a tuple of index arrays, one for each dimension of the array, whereas for tensorflow we get an (N x d) array, with d the number of dimensions.
Yeah, I'll definitely have a think about this. One might even just want …
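For reference, one possible way to paper over the `where` discrepancy until a translation exists. This is only a sketch: `where_as_tuple` is a hypothetical helper written here for illustration, not part of autoray.

```python
from autoray import do, infer_backend

def where_as_tuple(condition):
    """Return a tuple of per-dimension index arrays for every backend."""
    result = do("where", condition)
    if infer_backend(condition) == "tensorflow":
        # tf.where(condition) gives an (N, d) array of coordinates;
        # unstack it into d separate index arrays to match numpy/torch.
        return tuple(result[:, i] for i in range(result.shape[1]))
    return result

# e.g. with numpy (import numpy as np):
#   X = np.eye(3)
#   where_as_tuple(X == 0)
#   -> (array([0, 0, 1, 1, 2, 2]), array([1, 2, 0, 2, 0, 1]))
```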
In addition to the `take` translation I added in my previous PR, there are some more that might be good to add. At least, I am using these myself. I can make a PR.

- `split`. The syntax is different for numpy and tensorflow/torch. The former wants the number of splits or an array of locations of splits, whereas tensorflow/torch either want the number of splits or an array of split sizes. We can go from one format to the other using `np.diff` (see the sketch after this list).
- `diff`. This is implemented in tensorflow as `tf.experimental.numpy.diff`, and not implemented at all for torch. This also means I don't know what the cleanest way is to implement `split` mentioned above. Maybe just use `np.diff` and then convert to an array of the right backend if necessary?
- `linalg.norm` seems to work with tensorflow, but for torch we need to do `_SUBMODULE_ALIASES["torch", "linalg.norm"] = "torch"`.

I didn't check these things for any other libraries.
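On the `split` point, here is a minimal sketch of the `np.diff` idea; `locations_to_sizes` is a name made up here for illustration, not something autoray provides.

```python
import numpy as np

def locations_to_sizes(locations, axis_length):
    """Turn numpy-style split locations into the split sizes torch/tf expect.

    e.g. locations [2, 5] on an axis of length 8 -> sizes [2, 3, 3]
    """
    points = np.concatenate(([0], locations, [axis_length]))
    return np.diff(points).tolist()

# hypothetical usage with torch:
#   import torch
#   x = torch.arange(8)
#   torch.split(x, locations_to_sizes([2, 5], x.shape[0]))
# gives the same three chunks as np.split(np.arange(8), [2, 5])
```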
Maybe a bit of an overly ambitious idea, but have you ever thought about baking in support for JIT? Right now it seems that for TensorFlow everything works with eager execution, and I'm not sure you can compile the computation graphs resulting from a series of `ar.do` calls. For the other backends:

- PyTorch also supports JIT to some extent with TorchScript.
- Numpy doesn't have JIT, but there is Numba.
- CuPy has an interface with Numba that does seem to allow JIT.
- JAX has support for JIT.
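As a small data point on the JIT question (my own sketch, assuming jax is installed): because `ar.do` just dispatches to the backend of its inputs, a function written purely with `ar.do` can often be handed straight to that backend's JIT, e.g. `jax.jit`.

```python
import jax
import jax.numpy as jnp
from autoray import do

def norm_sq(x):
    # backend-agnostic: dispatches to numpy, torch, tensorflow, jax, ...
    return do("sum", do("abs", x) ** 2)

norm_sq_jit = jax.jit(norm_sq)
print(norm_sq_jit(jnp.arange(4.0)))  # -> 14.0
```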
Another thing is gradients. Several of these libraries have automatic gradients, and having an autoray interface for doing computations with automatic gradients would be fantastic as well (although probably also ambitious).
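Gradients are in a similar position (again my own sketch, assuming torch is installed): backend-native autodiff simply flows through `ar.do` calls, since `do` only forwards to the backend's own functions.

```python
import torch
from autoray import do

x = torch.arange(4.0, requires_grad=True)
y = do("sum", do("abs", x) ** 2)  # same backend-agnostic expression as above
y.backward()
print(x.grad)  # -> tensor([0., 2., 4., 6.])
```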
If you think these things are doable at all, I wouldn't mind spending some time to try to figure out how this could work.
Less ambitiously, you did mention in #3 that something along the lines of … would be pretty nice. I can try to do this. This probably comes down to checking for a global flag in `ar.do` after the line …
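For what it's worth, here is a purely hypothetical sketch of the "global flag" idea; the names (`_DEFAULT_BACKEND`, `set_default_backend`, `resolve_backend`) are invented for illustration and are not autoray's API.

```python
from autoray import infer_backend

_DEFAULT_BACKEND = None

def set_default_backend(backend):
    """Set a process-wide fallback backend, e.g. 'tensorflow'."""
    global _DEFAULT_BACKEND
    _DEFAULT_BACKEND = backend

def resolve_backend(like, *args):
    """How ar.do might pick a backend: explicit `like` first,
    then the global default, then inference from the first argument."""
    if like is not None:
        return like if isinstance(like, str) else infer_backend(like)
    if _DEFAULT_BACKEND is not None:
        return _DEFAULT_BACKEND
    return infer_backend(args[0])
```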