
Speed up Kronecker product #70

Closed · wants to merge 2 commits
Conversation

hitomitak
Contributor

Rewrote the Kronecker product calculation to avoid np.kron, because that numpy function is very computationally expensive for vector (1-D) inputs. This PR speeds up the reconstruction time, as shown in the following image.

Experiment Environment:

  • Cut random circuits of 29 and 30 qubits
  • max_subcircuit_width=10, max_cut=10

[image: reconstruction-time comparison]
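The rewrite described above can be sketched as follows (`kron_1d` is a hypothetical name used for illustration): for 1-D arrays, np.kron(a, b) equals the outer product of a and b flattened in row-major order.

```python
import numpy as np


def kron_1d(a: np.ndarray, b: np.ndarray) -> np.ndarray:
    # Outer product, then flatten: equivalent to np.kron for 1-D inputs.
    return (a[:, None] * b[None, :]).reshape(a.size * b.size)


a = np.random.rand(1000)
b = np.random.rand(1000)
assert np.allclose(kron_1d(a, b), np.kron(a, b))
```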

@CLAassistant

CLAassistant commented Mar 28, 2023

CLA assistant check
All committers have signed the CLA.

@garrison
Member

garrison commented Mar 28, 2023

Thanks for this. To be honest, it's kind of disappointing that numpy's kron is not at least as efficient as the code you give when it is passed 1D arrays.

@coveralls

Pull Request Test Coverage Report for Build 4544439588

  • 0 of 1 (0.0%) changed or added relevant lines in 1 file are covered.
  • No unchanged relevant lines lost coverage.
  • Overall coverage remained the same at 84.778%

Changes Missing Coverage:
  • circuit_knitting_toolbox/circuit_cutting/wire_cutting/wire_cutting_post_processing.py — 0 of 1 changed/added lines covered (0.0%)

Totals:
  • Change from base Build 4317758493: 0.0%
  • Covered Lines: 1526
  • Relevant Lines: 1800

💛 - Coveralls

@garrison
Member

garrison commented Apr 7, 2023

I was able to reproduce this in a simple test case using numpy 1.22.2:

>>> import timeit
>>> import numpy as np
>>> a = np.random.rand(1000)
>>> b = np.random.rand(1000)
>>> timeit.timeit(lambda: np.kron(a, b), number=200)
1.302669789060019
>>> timeit.timeit(lambda: (a[:,None] * b[None,:]).reshape(a.size * b.size), number=200)
0.25691784801892936
>>> np.allclose(np.kron(a, b), (a[:,None] * b[None,:]).reshape(a.size * b.size))
True
>>> np.__version__
'1.22.2'

However, if I try on numpy 1.24.2, the performance gap closes, and the current version on main might actually be a bit faster:

>>> import timeit
>>> import numpy as np
>>> a = np.random.rand(1000)
>>> b = np.random.rand(1000)
>>> timeit.timeit(lambda: np.kron(a, b), number=200)
0.2618443979881704
>>> timeit.timeit(lambda: (a[:,None] * b[None,:]).reshape(a.size * b.size), number=200)
0.30681791296228766
>>> np.allclose(np.kron(a, b), (a[:,None] * b[None,:]).reshape(a.size * b.size))
True
>>> np.__version__
'1.24.2'

Have you tried upgrading numpy?

@garrison
Member

garrison commented Apr 7, 2023

One alternative idea is to warn in this function if somebody is using an old numpy that is known to be slow.
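Such a guard could look like the following sketch (`warn_if_slow_numpy` is a hypothetical helper, and the simple version parsing ignores pre-release suffixes; the 1.23 threshold is an assumption based on this discussion):

```python
import warnings

import numpy as np


def warn_if_slow_numpy(version: str = np.__version__) -> bool:
    """Warn when numpy is older than 1.23, where np.kron is slow for 1-D inputs.

    Returns True if a warning was emitted.
    """
    major, minor = (int(part) for part in version.split(".")[:2])
    if (major, minor) < (1, 23):
        warnings.warn(
            "numpy < 1.23 has a slow np.kron for 1-D inputs; "
            "upgrade numpy for faster reconstruction",
            RuntimeWarning,
        )
        return True
    return False
```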

@garrison
Member

garrison commented Apr 7, 2023

My best guess is that the performance improvement is due to numpy/numpy#21354.

It might be better here to just require the first numpy release that includes that commit (1.23, I expect) as the minimum supported by the toolbox, rather than to emit a warning.

@hitomitak
Contributor Author

Thank you for your investigation. I was using an older version of numpy and was not aware of the improved performance. I agree with @garrison, so I closed this PR.

@hitomitak hitomitak closed this Apr 10, 2023
@garrison
Member

I just tried on numpy 1.23.0, and the performance is essentially the same between those two methods. I would support modifying pyproject.toml to require numpy>=1.23.0.
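The corresponding pyproject.toml change might look like this sketch (the toolbox's actual dependency list will contain other entries, which are omitted here):

```toml
[project]
dependencies = [
    "numpy>=1.23.0",
]
```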

@garrison garrison mentioned this pull request Jun 23, 2023