Benchmarks with BundleAdjustmentModels.jl #134
Comments
I'll investigate your benchmarks as soon as possible, but I just want to quickly comment on the following:
We've encountered some compile-time problems in SCT
Absolutely! We're actively looking for more large-scale problems to test and benchmark on!
I will open a separate discussion / issue, but it should also be quite easy to benchmark the OPF problems of |
Note that this was mainly due to the |
Does this issue still persist? I tried to run the code above but got a MethodError on line 10:

```julia
julia> nproblems = size(df)[1]
ERROR: MethodError: no method matching size(::JLD2.ReconstructedMutable{:DataFrame, (:columns, :colindex), Tuple{Any, DataFrames.Index}})
```
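For context, the failing snippet presumably just loads the problem table from BundleAdjustmentModels.jl along these lines; this is a hedged reconstruction, and `problems_df` is my assumption about the loader being used:

```julia
# Hedged reconstruction of the failing setup; the actual loading code is not shown above.
using BundleAdjustmentModels, DataFrames

df = problems_df()        # table of available bundle adjustment problems (assumed entry point)
nproblems = size(df, 1)   # the call that errors once `df` comes back as a JLD2.ReconstructedMutable
```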
Bump @amontoison
@adrhill I will have a look asap. Off-topic: I was talking with Michel Schenen this morning about this package and he has two comments:
|
This is actually a really fun aspect of this package: SCT has very small memory requirements, since we only need to keep track of index sets of non-zero values. Our default type is … Thanks for the reference, by the way; that looks intriguing!
And if you (against all odds) manage to run into a memory limit, we have an undocumented shared-memory mode in which all tracers share a single set. ;)
Yes, the authors did get mixed up. I see, you use index sets. That requires allocations (speed), right? It would be nice to have a method for using the underlying AD tools without index sets. For large problems like PDEs, I think index sets are prohibitive. Also, index sets won't work on GPUs. Why would one not just run on the CPU for the sparsity pattern? Well, in some cases, one might use a totally different algorithm on the GPU. The data structures might be quite different. Edit: But I see how this is totally fine for JuMP problems.
This tooling indeed already exists. Our tracer types are just thin wrappers around what we call …
As for GPUs, we have some ideas, but decided to get a publication for this package out the door first (#67). I don't see why GPU support would be impossible if we introduce a suitable statically sized pattern type.
I think this random probing might be a feature of DifferentiationInterface.jl instead. I already have a …
To clarify, SparseConnectivityTracer.jl does not run actual AD; it uses a different brand of abstract program interpretation which only tracks linear and nonlinear dependencies in scalar quantities. It seems extremely stupid and very allocation-intensive, but it's already much faster than what we had before, so why not 🤷
Good idea!
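For readers following along, the user-facing sparsity detection being discussed looks roughly like this; a sketch based on my understanding of SCT's detector interface, so check the package docs for the exact signatures:

```julia
using SparseConnectivityTracer

# Jacobian sparsity of a vector-valued function, obtained by tracing index sets
# through the program rather than by running real AD.
f(x) = [x[1]^2 + x[2], x[3] * x[1]]
jacobian_sparsity(f, rand(3), TracerSparsityDetector())

# Hessian sparsity of a scalar-valued function.
g(x) = sum(abs2, diff(x))
hessian_sparsity(g, rand(5), TracerSparsityDetector())
```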
SCT can be thought of as a standalone binary forward-mode AD backend.
SCT is essentially a binary version of ForwardDiff.jl. It is no more “stupid” than ForwardDiff and certainly less allocation-intensive.
It's much more allocation-intensive. Scalars in ForwardDiff are |
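To make the ForwardDiff analogy concrete, here is a toy sketch (emphatically not SCT's actual implementation) of a "binary dual number" that propagates index sets instead of partials; the heap-allocated `Set` is precisely where the allocation cost mentioned above comes from:

```julia
# Toy sketch of binary forward-mode tracing with index sets (illustration only).
struct IndexTracer
    inds::Set{Int}   # indices of the inputs this value depends on
end

# Every binary operation propagates the union of the operands' dependency sets.
Base.:+(a::IndexTracer, b::IndexTracer) = IndexTracer(union(a.inds, b.inds))
Base.:*(a::IndexTracer, b::IndexTracer) = IndexTracer(union(a.inds, b.inds))
Base.sin(a::IndexTracer) = IndexTracer(a.inds)   # unary ops keep the set unchanged

# Seed input i with {i}, evaluate the function on tracers, read off the pattern.
xs = [IndexTracer(Set([i])) for i in 1:3]
y = xs[1] * xs[2] + sin(xs[3])
y.inds   # Set([1, 2, 3]): the output depends on all three inputs
```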
What we could (should?) add to this package is a mechanism for "chunked" index sets, similar to ForwardDiff. This could greatly reduce the memory usage of SCT for large problems. |
Essentially we need a set type that is a statically-sized version of … And then we can either use one of those if it can fit enough inputs, or run the function repeatedly while grouping the inputs in chunks of a multiple of 64.
Now I get it
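A minimal sketch of that idea under made-up names (nothing below is SCT API): a fixed-capacity index set over at most 64 inputs stored in a single `UInt64`, so that set union becomes an allocation-free bitwise OR, with larger inputs handled by tracing in chunks as ForwardDiff does:

```julia
# Sketch only: a statically sized index set over inputs 1..64, stored as a bitmask.
struct ChunkSet64
    bits::UInt64
end

seed(i::Integer) = ChunkSet64(UInt64(1) << (i - 1))                    # the set {i}
Base.union(a::ChunkSet64, b::ChunkSet64) = ChunkSet64(a.bits | b.bits) # allocation-free union
indices(s::ChunkSet64) = [i for i in 1:64 if (s.bits >> (i - 1)) % Bool]

indices(union(seed(3), seed(17)))   # [3, 17]
# For n > 64 inputs, trace the function cld(n, 64) times, seeding a different
# block of up to 64 inputs in each pass (chunking, as in ForwardDiff).
```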
Who would have thought a simple issue bump could turn out to be so inspiring! :) Let's return to the BundleAdjustmentModels benchmarks here.
@adrhill You need an older version of … I should upgrade it in |
@adrhill @gdalle
I tried to run some benchmarks on the NLS problems using BundleAdjustmentModels.jl, but I'm encountering issues with the sparsity detection of any Hessian: it takes an extremely long time. This could be a valuable set of tests for you because the problems are quite large. For the largest problems, my computer also crashes during the sparsity pattern detection of some Jacobians.
Note that I replaced `norm(r)` with `sqrt(dot(r, r))` in the code (see this PR) to use version 5.x of SparseConnectivityTracer.jl. Guillaume, the Jacobian/Hessian of these problems could also be useful for SparseMatrixColorings.jl.
Note that you need `hessian_residual_backend = ADNLPModels.SparseADHessian` to test the Hessian.
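To give a rough idea of how such a benchmark can be reproduced in isolation, here is a stand-alone sketch of timing Hessian sparsity detection on a sum-of-squares objective; the residual below is a toy stand-in (not a BundleAdjustmentModels problem), and the ADNLPModels machinery is deliberately omitted:

```julia
using SparseConnectivityTracer, LinearAlgebra

resid(x) = [x[i] * x[i+1] - 1 for i in 1:length(x)-1]   # toy residual, not a bundle adjustment one
obj(x) = (r = resid(x); dot(r, r))                      # sqrt(dot(r, r)) is what replaced norm(r) above

x = rand(1_000)
@time hessian_sparsity(obj, x, TracerSparsityDetector())   # sparsity pattern of the objective's Hessian
```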