distribute with MKL? #6
It is probably not worth the hassle - need to maintain the licenses when they expire, OS issues, etc. What kind of performance difference are we talking about?
Right now we're statically linking reference BLAS/LAPACK and using MUMPS for sparse linear algebra, so I wouldn't be surprised if there's a factor of 5 speedup possible without even considering multiple threads, but I should actually benchmark it.
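For the dense-BLAS half of that benchmark, a quick sanity check is to time a double-precision matrix multiply through whatever BLAS the numerical stack is linked against. This is a minimal sketch in Python (numpy stands in for the statically linked BLAS under discussion; the function name and sizes are illustrative, not from the thread):

```python
import time
import numpy as np

def dgemm_gflops(n=500, reps=5):
    # Times an n-by-n double-precision matmul through whatever BLAS
    # numpy is linked against; dgemm costs roughly 2*n^3 flops.
    rng = np.random.default_rng(0)
    A = rng.random((n, n))
    B = rng.random((n, n))
    A @ B  # warm-up so one-time initialization doesn't skew the timing
    best = float("inf")
    for _ in range(reps):
        t0 = time.perf_counter()
        A @ B
        best = min(best, time.perf_counter() - t0)
    return 2.0 * n**3 / best / 1e9

print(f"{dgemm_gflops():.1f} GFLOP/s")
```

Running the same measurement against builds linked to reference BLAS, OpenBLAS, and MKL would put a concrete number on the "factor of 5" guess, at least for the dense kernels.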
Any idea why MUMPS is so much better?
Worse, you mean?
I was a bit surprised the last time I compared Mumps to Pardiso (this was using Basel Pardiso, before we got MKL Pardiso working in Ipopt https://projects.coin-or.org/Ipopt/ticket/216 - wouldn't expect MKL Pardiso to be much different). These were all using MKL for Blas, allocating threads to the linear solver for the last 4 solvers, or to Blas for the first 4. I have some data somewhere comparing different Blas implementations but IIRC it wasn't much more than 20% difference. These conclusions are very problem-dependent though, it's all down to how large the dense sub-blocks get during the multifrontal sparse solve.
Anyone know what the license looks like on the Matlab Compiler Runtime? They ship MA57 for sparse solves.
I meant how does MUMPS compare to UMFPACK for Ipopt, or does Ipopt not support calling UMFPACK?
UMFPACK doesn't work for Ipopt because Ipopt needs to check the inertia of the symmetric indefinite KKT matrix to ensure descent properties (it does a regularization and re-factorizes if it doesn't get the expected inertia). LU and Cholesky won't give you that; only Bunch-Kaufman LDL will. Those 8 linear solvers above are pretty much an exhaustive list of usable candidates for what Ipopt needs, with the exception of TAUCS. Several years ago they looked at a TAUCS interface, but my understanding is the performance wasn't good enough to be worth keeping around. If you're solving a convex problem you can do block-wise Cholesky, so optimization codes for QP/SOCP/SDP have more choices of linear algebra libraries, but Ipopt is designed for general, possibly non-convex problems.
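The inertia check described above can be illustrated with a small sketch. Ipopt actually reads the inertia off the LDL factorization reported by the linear solver; here dense eigenvalues stand in for that, and the 2-variable, 1-constraint KKT matrix is made up for illustration:

```python
import numpy as np

def inertia(M):
    # Inertia of a symmetric matrix: (# positive, # negative, # zero
    # eigenvalues). Real solvers get this from the LDL^T factorization
    # rather than a dense eigendecomposition.
    w = np.linalg.eigvalsh(M)
    tol = 1e-10 * max(1.0, np.abs(w).max())
    return (int((w > tol).sum()),
            int((w < -tol).sum()),
            int((np.abs(w) <= tol).sum()))

# Hypothetical KKT system [[H, A^T], [A, 0]] with n=2 variables and
# m=1 equality constraint; H is indefinite but positive definite on
# the null space of A, so the inertia still comes out right.
H = np.array([[2.0, 0.0],
              [0.0, -1.0]])
A = np.array([[1.0, 1.0]])
K = np.block([[H, A.T],
              [A, np.zeros((1, 1))]])

n, m = 2, 1
npos, nneg, nzero = inertia(K)
# Descent requires inertia exactly (n, m, 0); otherwise the interior
# point method adds a diagonal regularization and re-factorizes.
needs_regularization = (npos, nneg, nzero) != (n, m, 0)
print(inertia(K), needs_regularization)
```

Flipping `H` to something negative definite (say `-np.eye(2)`) changes the inertia away from `(n, m, 0)`, which is exactly the signal that triggers Ipopt's regularize-and-refactorize loop.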
Got it. Thanks for the explanation.
Interesting results. Using the Matlab Compiler Runtime sounds sketchy. What about compiling and statically linking our own 32-bit integer version of OpenBLAS?
We could also pay for a binary redistribution license, assuming Julia as an organization has some resources. The HSL folks write really good code and it's worth supporting them; we might be able to get a better deal than whatever they charged Mathworks. Statically linked LP64 OpenBLAS should work, but I think getting Julia issue 4923 sorted would be preferable in the long run: ILP64 OpenBLAS with prefixes on all the functions, plus a shared LP64 build without prefixes, built by Julia but only used by packages.
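The suffixed-ILP64 scheme described above corresponds to OpenBLAS's documented `INTERFACE64` and `SYMBOLSUFFIX` build options. A hedged build-recipe sketch (a config fragment, not from the thread; the library filename depends on other build settings):

```shell
# Build OpenBLAS with 64-bit integer indices (ILP64) and a suffix
# appended to every exported symbol, so it can coexist in one process
# with an unsuffixed LP64 BLAS.
make INTERFACE64=1 SYMBOLSUFFIX=64_ libs shared

# After the build, dgemm_ is exported as dgemm_64_, so it cannot
# collide with an LP64 library's dgemm_:
nm -D libopenblas*.so | grep dgemm
```

This is the renaming approach that avoids the symbol clashes between an ILP64 BLAS inside Julia and LP64-expecting packages like Ipopt.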
We certainly can check with the HSL folks. If the licensing is possible with a reasonable cost, I am sure we can find a way to make this work. |
A potential route forward for this is to use |
@ViralBShah mentioned in JuliaLang/julia#4272 that julia has a license to redistribute MKL. Could we use this for the Ipopt binaries? This should give much better default performance than with MUMPS.
@tkelman