
test: working on benchmarks with Finch as backend #772

Open
DeaMariaLeon wants to merge 5 commits into main from DeaMariaLeon:bench2

Conversation

DeaMariaLeon
Collaborator

What type of PR is this? (check all applicable)

  • πŸ’Ύ Refactor
  • πŸͺ„ Feature
  • 🐞 Bug Fix
  • πŸ”§ Optimization
  • πŸ“š Documentation
  • πŸ§ͺ Test
  • πŸ› οΈ Other

Related issues

  • Related issue #
  • Closes #

Checklist

  • Code follows style guide
  • Tests added
  • Documented the changes

Please explain your changes below.

Only added Finch on config.py (locally I see the names of the backends, but I'm not sure whether this renders correctly on CodSpeed).
Also, per Hameer's instructions, I need to see which tests fail with Finch.
I would also like to check whether the change to codspeed.yml works as expected.
Finally, I have only added the line `if hasattr(sparse, "compiled"): f = sparse.compiled(f)` to the first two benchmarks, to make sure I understood correctly.
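
For reference, a minimal sketch of what that guard looks like in one of the element-wise benchmarks; the function name, fixture, and density here are illustrative rather than the exact code in this PR:

```python
import operator

import pytest
import sparse


@pytest.mark.parametrize("side", [100, 1000])
def test_elemwise_add(benchmark, side):
    # Illustrative setup; the real benchmarks also parametrize backend, rank and format.
    x = sparse.random((side, side), density=0.01)
    y = sparse.random((side, side), density=0.01)

    f = operator.add
    # Only the Finch backend exposes sparse.compiled, so guard with hasattr
    # to keep the Numba backend working unchanged.
    if hasattr(sparse, "compiled"):
        f = sparse.compiled(f)

    f(x, y)  # warm-up / compilation pass
    benchmark(f, x, y)
```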


codspeed-hq bot commented Sep 10, 2024

CodSpeed Performance Report

Merging #772 will not alter performance

Comparing DeaMariaLeon:bench2 (f822c24) with main (47b246b)

Summary

βœ… 27 untouched benchmarks

πŸ†• 313 new benchmarks
⁉️ 313 dropped benchmarks

⚠️ Please fix the performance issues or acknowledge them on CodSpeed.

Benchmarks breakdown

| Benchmark | `main` | `DeaMariaLeon:bench2` | Change |
| --- | --- | --- | --- |
| πŸ†• `test_elemwise[backend='Numba'-side=100-rank=1-format='coo'-add]` | N/A | 3.5 ms | N/A |
| πŸ†• `test_elemwise[backend='Numba'-side=100-rank=1-format='coo'-mul]` | N/A | 2.7 ms | N/A |
| πŸ†• `test_elemwise[backend='Numba'-side=100-rank=1-format='gcxs'-add]` | N/A | 4.1 ms | N/A |
| πŸ†• `test_elemwise[backend='Numba'-side=100-rank=1-format='gcxs'-mul]` | N/A | 3.3 ms | N/A |
| πŸ†• `test_elemwise[backend='Numba'-side=100-rank=2-format='coo'-add]` | N/A | 4 ms | N/A |
| πŸ†• `test_elemwise[backend='Numba'-side=100-rank=2-format='coo'-mul]` | N/A | 3 ms | N/A |
| πŸ†• `test_elemwise[backend='Numba'-side=100-rank=2-format='gcxs'-add]` | N/A | 7.9 ms | N/A |
| πŸ†• `test_elemwise[backend='Numba'-side=100-rank=2-format='gcxs'-mul]` | N/A | 6.8 ms | N/A |
| πŸ†• `test_elemwise[backend='Numba'-side=100-rank=3-format='coo'-add]` | N/A | 22 ms | N/A |
| πŸ†• `test_elemwise[backend='Numba'-side=100-rank=3-format='coo'-mul]` | N/A | 9.6 ms | N/A |
| πŸ†• `test_elemwise[backend='Numba'-side=100-rank=3-format='gcxs'-add]` | N/A | 32.9 ms | N/A |
| πŸ†• `test_elemwise[backend='Numba'-side=100-rank=3-format='gcxs'-mul]` | N/A | 16.9 ms | N/A |
| πŸ†• `test_elemwise[backend='Numba'-side=1000-rank=1-format='coo'-add]` | N/A | 3.5 ms | N/A |
| πŸ†• `test_elemwise[backend='Numba'-side=1000-rank=1-format='coo'-mul]` | N/A | 2.7 ms | N/A |
| πŸ†• `test_elemwise[backend='Numba'-side=1000-rank=1-format='gcxs'-add]` | N/A | 4.2 ms | N/A |
| πŸ†• `test_elemwise[backend='Numba'-side=1000-rank=1-format='gcxs'-mul]` | N/A | 3.3 ms | N/A |
| πŸ†• `test_elemwise[backend='Numba'-side=1000-rank=2-format='coo'-add]` | N/A | 19.9 ms | N/A |
| πŸ†• `test_elemwise[backend='Numba'-side=1000-rank=2-format='coo'-mul]` | N/A | 8.9 ms | N/A |
| πŸ†• `test_elemwise[backend='Numba'-side=1000-rank=2-format='gcxs'-add]` | N/A | 28.7 ms | N/A |
| πŸ†• `test_elemwise[backend='Numba'-side=1000-rank=2-format='gcxs'-mul]` | N/A | 14.1 ms | N/A |
| ... | ... | ... | ... |

ℹ️ Only the first 20 benchmarks are displayed. Go to the app to view all benchmarks.

Collaborator

@hameerabbasi left a comment


Great work @DeaMariaLeon! A few suggestions.

benchmarks/conftest.py (resolved, outdated)
benchmarks/test_benchmark_coo.py (resolved, outdated)
benchmarks/test_benchmark_coo.py (resolved)
@DeaMariaLeon marked this pull request as ready for review on September 16, 2024 at 13:44
@hameerabbasi
Collaborator

It seems the Finch benchmarks don't work: https://github.com/pydata/sparse/actions/runs/10884899040/job/30201213331?pr=772#step:6:142

```diff
     m, n, p = sides

-    if m * n >= max_size or n * p >= max_size:
+    if m * n >= max_size or n * p >= max_size or m * n <= min_size or n * p <= min_size:
         pytest.skip()

     rng = np.random.default_rng(seed=seed)
     x = sparse.random((m, n), density=DENSITY, format=format, random_state=rng)
     y = sparse.random((n, p), density=DENSITY, format=format, random_state=rng)
```

Collaborator Author


All the tests in the file test_benchmark_coo.py were expected to fail with Finch.
Finch's sparse.random (used to build x and y on lines 25 and 26) is different: it doesn't have the format argument that is used with Numba.

Collaborator


@mtsokol How hard would it be to add this to finch?

Collaborator


You can probably use `sparse.asarray(..., format=<format>)`.
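
If that works, the benchmark setup could stay backend-agnostic along these lines. This is a sketch that assumes `sparse.asarray` accepts a `format` keyword on both backends (which is what the suggestion above implies), that `sparse.random` works without `format` everywhere, and that `"coo"` is a valid format name on the Finch side; the helper itself is hypothetical, not part of the PR:

```python
import sparse

DENSITY = 0.01


def make_operand(shape, format):
    # Build with the backend's default layout, then reformat via asarray,
    # as suggested above.
    arr = sparse.random(shape, density=DENSITY)
    return sparse.asarray(arr, format=format)


x = make_operand((100, 100), "coo")
y = make_operand((100, 100), "coo")
```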

Collaborator


Do we just need a `format=nothing` default and then a reformatting line inside `finch.random`?

Collaborator


That'd suffice -- if the format isn't what the format arg demands, reformat.
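
The same reformat-on-demand idea, sketched as a hypothetical Python-side wrapper (the actual proposal is a `format=nothing` default inside `finch.random` itself; this helper and the `sparse.asarray(..., format=...)` call are assumptions taken from the suggestions above, not existing API guarantees):

```python
import sparse


def random_with_format(shape, density, format=None):
    # Hypothetical helper: build a random sparse array with the backend's
    # default layout, then reformat only if a specific format was requested.
    arr = sparse.random(shape, density=density)
    if format is not None:
        # "If the format isn't what the format arg demands, reformat."
        arr = sparse.asarray(arr, format=format)
    return arr
```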

Collaborator


Do we have a way of specifying `format` currently?

Collaborator


Comment on lines +28 to +29
```python
if hasattr(sparse, "compiled"):
    operator.matmul = sparse.compiled(operator.matmul)
```
Collaborator


Suggested change

```diff
-if hasattr(sparse, "compiled"):
-    operator.matmul = sparse.compiled(operator.matmul)
+f = operator.matmul
+if hasattr(sparse, "compiled"):
+    f = sparse.compiled(f)
```

Collaborator


Suggested change

```diff
-if hasattr(sparse, "compiled"):
-    operator.matmul = sparse.compiled(operator.matmul)
+def f(x, y):
+    return x @ y
+
+if hasattr(sparse, "compiled"):
+    f = sparse.compiled(f)
```

```python
        pytest.skip()

    rng = np.random.default_rng(seed=seed)
    x = sparse.random((m, n), density=DENSITY, format=format, random_state=rng)
    y = sparse.random((n, p), density=DENSITY, format=format, random_state=rng)

    if hasattr(sparse, "compiled"):
        operator.matmul = sparse.compiled(operator.matmul)

    x @ y  # Numba compilation
```
Collaborator


Suggested change

```diff
-    x @ y  # Numba compilation
+    f(x, y)  # Compilation
```
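
Putting these suggestions together, the matmul benchmark body might end up roughly like this. This is a sketch: it assumes `sides`, `format`, `seed`, `max_size`, and `min_size` are supplied by fixtures or parametrization as in the existing file, and `DENSITY = 0.01` is illustrative:

```python
import numpy as np
import pytest
import sparse

DENSITY = 0.01


def test_matmul(benchmark, sides, format, seed, max_size, min_size):
    m, n, p = sides

    if m * n >= max_size or n * p >= max_size or m * n <= min_size or n * p <= min_size:
        pytest.skip()

    rng = np.random.default_rng(seed=seed)
    x = sparse.random((m, n), density=DENSITY, format=format, random_state=rng)
    y = sparse.random((n, p), density=DENSITY, format=format, random_state=rng)

    def f(x, y):
        return x @ y

    if hasattr(sparse, "compiled"):
        f = sparse.compiled(f)

    f(x, y)  # compilation / warm-up, instead of the bare `x @ y`
    benchmark(f, x, y)
```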

```diff
     m, n, p = sides

-    if m * n >= max_size or n * p >= max_size:
+    if m * n >= max_size or n * p >= max_size or m * n <= min_size or n * p <= min_size:
```
Collaborator


Suggested change

```diff
-    if m * n >= max_size or n * p >= max_size or m * n <= min_size or n * p <= min_size:
+    if m * n >= max_size or n * p >= max_size or m * n * DENSITY <= min_size or n * p * DENSITY <= min_size:
```
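
The reasoning behind the `DENSITY` factor: `min_size` should bound the expected number of stored elements, not the dense shape. A quick back-of-the-envelope check (`DENSITY = 0.01` is assumed here for illustration):

```python
# With side = 100 and rank = 2 the dense size looks large, but the sparse
# array is still tiny once density is taken into account.
m = n = 100
DENSITY = 0.01
dense_size = m * n              # 10_000
expected_nnz = m * n * DENSITY  # ~100 stored elements
```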

@hameerabbasi
Collaborator

`backend` should be the first argument in all tests.
