In-place operations #1134
In general, I strongly prefer immutable objects and pure functions. Your proposal is exactly the opposite of that, so there must be a really good reason for it. For matrices, reuse may seriously improve performance, which can be a valid reason. We should validate whether this is indeed the case though, and find out how much we could gain. From a practical point of view: it would require a huge amount of refactoring, we should keep that in mind.
You're spot on regarding the trade-off between the benefits of immutability and improved performance. I'm from a scientific computing background, so I'll happily throw out the former for the latter. But of course you might come to a different conclusion (which is perfectly fine!). It probably boils down to deciding whether … AFAIK, there is currently no de facto standard JS library for scientific computing. Besides …
Yeah, first we should do a benchmark to see how much impact it would have. @torfsen how does NumPy do this? Does NumPy have such functions? Just thinking aloud: maybe we could consider … Another direction is that we could have such functions as methods on the Matrix class but not as pure functions, i.e. methods can be mutable but functions not. There are already immutable methods on Matrix like …
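A micro-benchmark along these lines could compare an allocating add against an in-place add on plain arrays. This is only a rough sketch: `addNew` and `addInPlace` are hypothetical names, and real math.js operations would add typed-function dispatch overhead on top of the raw loops shown here.

```javascript
// Rough micro-benchmark: allocating add vs. in-place add on plain arrays.
// addNew / addInPlace are hypothetical stand-ins, not math.js API.

function addNew(x, y) {
  const out = new Array(x.length); // fresh allocation on every call
  for (let i = 0; i < x.length; i++) out[i] = x[i] + y[i];
  return out;
}

function addInPlace(x, y) {
  for (let i = 0; i < x.length; i++) x[i] += y[i]; // overwrite x
  return x;
}

const n = 1e6;
const x = new Array(n).fill(1);
const y = new Array(n).fill(2);

let t = Date.now();
for (let k = 0; k < 20; k++) addNew(x, y);
console.log('allocating:', Date.now() - t, 'ms');

t = Date.now();
for (let k = 0; k < 20; k++) addInPlace(x, y);
console.log('in-place:  ', Date.now() - t, 'ms');
```

The interesting number is not one run but the gap between the two loops, and how it changes once garbage-collection pressure from the allocating variant kicks in.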
I believe a much greater improvement to performance would come from using something like NumPy's ndarray to store vectors and matrices rather than JavaScript arrays, and therefore that should be a priority. Additionally, I would argue for more meaningful method names such as …
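For comparison, an ndarray-style backing store could be sketched as a flat `Float64Array` plus shape metadata. This is a hypothetical layout for illustration only, not math.js's current `DenseMatrix` representation (which uses nested JS arrays):

```javascript
// Sketch of an ndarray-like 2-D matrix backed by a flat Float64Array.
// Hypothetical layout for illustration, not math.js's DenseMatrix.

class NdMatrix {
  constructor(rows, cols) {
    this.rows = rows;
    this.cols = cols;
    this.data = new Float64Array(rows * cols); // one contiguous buffer
  }
  get(i, j) { return this.data[i * this.cols + j]; } // row-major indexing
  set(i, j, v) { this.data[i * this.cols + j] = v; }
}

// Element-wise in-place add over the flat buffer: one tight loop,
// no nested-array traversal and no boxed number objects.
function iadd(a, b) {
  for (let i = 0; i < a.data.length; i++) a.data[i] += b.data[i];
  return a;
}

const a = new NdMatrix(2, 2);
const b = new NdMatrix(2, 2);
a.set(0, 0, 1);
b.set(0, 0, 2);
console.log(iadd(a, b).get(0, 0)); // 3
```

A flat typed array keeps the data contiguous in memory, which is the main reason ndarray-style stores tend to beat arrays-of-arrays for element-wise work.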
See #760
Please don't forget the … And what do you think about the …
@harrysarson good point. We should really first put these ideas for improvement in perspective. See also the …
@josdejong NumPy has a complex array architecture and supports in-place updates, for example via the standard operators. To be honest, I didn't even know about the …
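NumPy's other mechanism for this is an optional output argument (as in `np.add(x, y, out=x)`). Translated to JavaScript, that would mean a single `add` with an optional third parameter instead of a separate `iadd` family. The `add` below is a hypothetical standalone function, not the math.js signature:

```javascript
// Sketch of a NumPy-style optional `out` argument in JavaScript.
// When `out` is omitted, a new array is allocated; when `out === x`,
// the operation is effectively in-place. Hypothetical, not math.js API.

function add(x, y, out) {
  const target = out ?? new Array(x.length);
  for (let i = 0; i < x.length; i++) target[i] = x[i] + y[i];
  return target;
}

const x = [1, 2, 3];
const y = [10, 10, 10];

const z = add(x, y); // allocates: x is untouched
add(x, y, x);        // in-place: x is now [11, 12, 13]
console.log(z, x);   // [ 11, 12, 13 ] [ 11, 12, 13 ]
```

One advantage of this shape over `iadd`-style twins is that it keeps a single function name per operation while still letting callers opt in to reuse.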
I was thinking a bit more about these operators. The most important arguments, I think, are: …
Going to reference this in the "future of matrices" discussion #2702 so closing it. |
Currently, if I want to add a vector `y` to an existing vector `x` and store the result in `x`, I need to do `x = math.add(x, y)`. This means that new storage has to be allocated to hold the result of the addition. I'd like to avoid that.

I suggest in-place variants of those operations where it makes sense, e.g. `math.iadd`. From an implementation point of view, `add(x, y)` could easily be implemented via `iadd` (for example, by applying `iadd` to a copy of `x`).

Almost all arithmetic, bitwise, complex, logical, trigonometry and some of the matrix and set functions could benefit from an in-place variant.
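The proposed pair could be sketched like this on plain numeric arrays. Both functions here are hypothetical illustrations; the real math.js versions would need typed-function dispatch across all supported matrix storage and number types:

```javascript
// Sketch of the proposed pair: a mutating iadd plus a pure add built on it.
// Plain numeric arrays only; hypothetical, not the math.js implementation.

function iadd(x, y) {
  for (let i = 0; i < x.length; i++) x[i] += y[i]; // overwrite x in place
  return x;
}

function add(x, y) {
  return iadd(x.slice(), y); // copy first, then reuse the in-place kernel
}

const x = [1, 2, 3];
console.log(add(x, [1, 1, 1]));  // [ 2, 3, 4 ]; x is unchanged
console.log(iadd(x, [1, 1, 1])); // [ 2, 3, 4 ]; x itself is now mutated
```

Building the pure variant on top of the in-place kernel, as suggested above, means the arithmetic itself lives in one place and only the copy is optional.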