Optimize operations on matrices containing a single data type #1709

Closed
josdejong opened this issue Jan 8, 2020 · 3 comments

@josdejong
Owner

The performance of matrix operations is optimized for matrices containing a single datatype (like numbers). This only applies when a matrix is created passing the datatype. The performance can probably be improved a lot by automatically detecting the datatype of matrix contents on creation.

From this I see three interesting action points:

  1. When creating a matrix, the datatype (like number) should be automatically determined so any calculations on this matrix are way faster.
  2. Maybe the matrix type should be determined lazily.
  3. From the benchmarks, I don't see a difference in performance for det, whilst it should be much faster. I think the datatype information may be lost along the way, possibly inside lup. If we can fix that, it would mean another huge bump in the performance of det :), see Improve performance of determinant #908

For reference: #1154 (comment), #908 (comment)
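As a quick illustration of the current situation (a sketch only; matrix(data, format, datatype) is the existing mathjs API, while the automatic detection discussed above does not exist yet):

import { create, all } from 'mathjs'
const math = create(all)

// Explicit datatype: the matrix stores datatype 'number', so operations
// like multiply can dispatch to the optimized number-only implementations.
const A = math.matrix([[1, 2], [3, 4]], 'dense', 'number')

// No datatype: the same data, but operations fall back to the generic,
// type-checked code path.
const B = math.matrix([[1, 2], [3, 4]])

math.multiply(A, A) // eligible for the fast, single-datatype path
math.multiply(B, B) // generic path; this is what automatic detection would speed up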

@cshaa
Collaborator

cshaa commented Feb 22, 2020

What if one creates a matrix filled with numbers but then changes one element to a complex number? This is something I would do without suspecting there might be side effects. So an implicit datatype sounds like a bad idea to me.

On the other hand, what if we added specialized, user-unfriendly functions, which would avoid type-checking and just assume you know what you are doing? These could be used in the performance-critical parts of algorithms as well as by savvy users who need performance. I'm thinking something like:

math.fast.multiplyNumericMatrices = function(A: number[][], B: number[][]): void

You'd pass matrices A and B to the function; it would multiply them and write the result into A.
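For concreteness, a minimal sketch of what such an unchecked helper could look like (multiplyNumericMatrices is the name proposed above; the square-matrix restriction and the row buffer are my assumptions, since writing the product back into A is only safe once a full row of it has been computed):

// Hypothetical sketch, not part of mathjs: multiply two square numeric
// matrices with no type or dimension checks, writing the product into A.
function multiplyNumericMatrices(A: number[][], B: number[][]): void {
  const n = A.length
  const row: number[] = new Array(n)
  for (let i = 0; i < n; i++) {
    for (let j = 0; j < n; j++) {
      let sum = 0
      for (let k = 0; k < n; k++) sum += A[i][k] * B[k][j]
      row[j] = sum
    }
    // Row i of A is no longer needed once row i of the product is finished,
    // so it can be overwritten now.
    for (let j = 0; j < n; j++) A[i][j] = row[j]
  }
}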


A little unrelated, but I'll leave it here:
If we wanted to use math.js for operations on huge matrices, the bottleneck would imho be the time needed to allocate and free the huge arrays. For example, the most intensive part of eigs is doing this until the matrix converges to an upper triangular form:

let {Q, R} = qr(A)
A = multiply(R, Q)
S = multiply(S, R)

This creates four matrices every iteration. For a 50×50 matrix, this means allocating and freeing at least 80 kB of memory every step. It wouldn't have to be this way if we had functions that mutate matrices instead of creating new ones. But maybe I'm wrong and the JIT compiler reuses the allocated memory efficiently; I'd have to benchmark it.
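A rough way to check that last point (a benchmark sketch only; qr, multiply, identity and random are existing mathjs functions, and the 50×50 size matches the example above):

import { create, all } from 'mathjs'
const math = create(all)

// Time 100 iterations of the loop above on a 50×50 random matrix, so the
// per-step allocation cost is included in the measurement.
let A = math.matrix(math.random([50, 50]))
let S = math.identity(50)

console.time('100 QR steps')
for (let i = 0; i < 100; i++) {
  const { Q, R } = math.qr(A) // allocates Q and R
  A = math.multiply(R, Q)     // allocates the next iterate
  S = math.multiply(S, R)     // allocates the accumulated factor
}
console.timeEnd('100 QR steps')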

EDIT: Oh, there's already an issue for it: #1659

@josdejong
Owner Author

Thanks for your input. Good point: you may have a case where some matrix elements are accidentally changed to a different type than, say, number, and the performance then drops without the user understanding why. I think it should be possible to make the type explicit and lock it, so that when you've created a "number" Matrix explicitly and try to put a complex number into it, an error is thrown.
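A minimal sketch of that "lock the datatype" idea (hypothetical, not current mathjs behavior; it simply wraps set on a number-typed matrix):

import { create, all } from 'mathjs'
const math = create(all)

// Hypothetical: reject non-number values on an explicitly 'number'-typed matrix.
function lockedNumberMatrix(data: number[][]) {
  const m = math.matrix(data, 'dense', 'number')
  const originalSet = m.set.bind(m)
  m.set = (index, value, defaultValue) => {
    if (typeof value !== 'number') {
      throw new TypeError('Matrix is locked to datatype "number"')
    }
    return originalSet(index, value, defaultValue)
  }
  return m
}

const M = lockedNumberMatrix([[1, 2], [3, 4]])
M.set([0, 0], 5)                    // fine
// M.set([0, 1], math.complex(1, 2)) // would throw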

Let's keep the discussion on in-place matrix operations in #1659, OK?

@gwhitney
Collaborator

gwhitney commented Oct 6, 2023

I think this is pretty much subsumed by #2702, so closing this.
