
Add CI benchmarking #54

Closed
adrhill opened this issue May 7, 2024 · 3 comments · Fixed by #58
Comments

@adrhill
Owner

adrhill commented May 7, 2024

No description provided.

@gdalle
Collaborator

gdalle commented May 9, 2024

For the benchmarks, all the test cases we have are very shallow: one layer of convolution, one iteration of the Brusselator. It would make sense to add functions with a deeper computational graph, so that we can observe the effects of, e.g., recursive versus reallocating sets.

My suggestion for the prototypical Jacobian tracing benchmark would be iterated multiplication by random sparse matrices:

using SparseArrays: sprand

function f(x; p, l)
    n = length(x)
    y = copy(x)
    for _ in 1:l
        A = sprand(n, n, p)  # random sparse n×n matrix with density p
        y = A * y
    end
    return y
end

We then have only three parameters to vary: n (dimension), p (sparsity) and l (depth).
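To illustrate how those three parameters might be swept, here is a minimal self-contained sketch. The parameter grid is illustrative, not taken from the issue; it only assumes the standard-library SparseArrays package and repeats the function above for completeness:

```julia
using SparseArrays: sprand

# Same benchmark function as above, repeated so this sketch is self-contained.
function f(x; p, l)
    n = length(x)
    y = copy(x)
    for _ in 1:l
        A = sprand(n, n, p)  # random sparse n×n matrix with density p
        y = A * y
    end
    return y
end

# Hypothetical sweep over n (dimension), p (sparsity), and l (depth).
for n in (10, 100), p in (0.05, 0.25), l in (1, 10)
    x = rand(n)
    y = f(x; p=p, l=l)
    @assert length(y) == n  # the output dimension is preserved at every depth
end
```

In an actual CI benchmark, each `(n, p, l)` combination would become one timed case, so the grid can be kept small to bound CI runtime.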

@adrhill
Owner Author

adrhill commented May 9, 2024

Good point. We could also integrate the Brusselator ODE using SimpleDiffEq.jl and evaluate a small LeNet5 CNN.

@adrhill
Owner Author

adrhill commented May 9, 2024

Primal value comparisons on MaxPool layers require local tracers, so the LeNet5 benchmark will have to wait for #57.

@adrhill adrhill mentioned this issue May 9, 2024