QulacsBackend
#1272
Conversation
Codecov Report: all modified and coverable lines are covered by tests ✅

```
@@           Coverage Diff           @@
##           master    #1272   +/-  ##
=======================================
  Coverage   99.84%   99.84%
=======================================
  Files          75       76      +1
  Lines       10807    10844     +37
=======================================
+ Hits        10790    10827     +37
  Misses         17       17
```

Flags with carried-forward coverage won't be shown. ☔ View the full report in Codecov by Sentry.
@scarrazza some preliminary results for the qulacs backend: it looks indeed comparable with our `qibojit` (benchmark plot: 1 core / 8 cores).
Thanks @BrunoLiegiBastonLiegi, could you please include some GPU curves too?
Yes, I am trying to install the
Looks good; however, we are missing documentation.
I added a small paragraph in the documentation about it.
Thanks @BrunoLiegiBastonLiegi for the effort!
```diff
@@ -435,7 +435,7 @@ def test_measurement_basis_list(backend):
     c.add(gates.H(2))
     c.add(gates.X(3))
     c.add(gates.M(0, 1, 2, 3, basis=[gates.X, gates.Z, gates.X, gates.Z]))
-    result = c(nshots=100)
+    result = backend.execute_circuit(c, nshots=100)
```
I like the explicit backend usage a lot, but why do you need it for this PR?
Because otherwise only the `GlobalBackend` is tested here, if I am not mistaken. Anyway, this was failing with the `QulacsBackend`, but it was not supposed to be testing it, even though it did due to the `GlobalBackend` usage.
I thought that @stavros11 set these tests to run with all the available backends, through a mechanism implemented in `conftest.py` (lines 63 to 87 in c68e58b):
```python
@pytest.fixture
def backend(backend_name):
    yield get_backend(backend_name)


def pytest_generate_tests(metafunc):
    module_name = metafunc.module.__name__
    if module_name == "tests.test_models_distcircuit_execution":
        config = [(bk, acc) for acc in ACCELERATORS for bk in MULTIGPU_BACKENDS]
        metafunc.parametrize("backend_name,accelerators", config)
    else:
        if "backend_name" in metafunc.fixturenames:
            if "accelerators" in metafunc.fixturenames:
                config = [(backend, None) for backend in AVAILABLE_BACKENDS]
                config.extend(
                    (bk, acc) for acc in ACCELERATORS for bk in MULTIGPU_BACKENDS
                )
                metafunc.parametrize("backend_name,accelerators", config)
            else:
                metafunc.parametrize("backend_name", AVAILABLE_BACKENDS)
        elif "accelerators" in metafunc.fixturenames:
            metafunc.parametrize("accelerators", ACCELERATORS)
```
It's true that it is using the `GlobalBackend`, but it should also be setting the backend registered there to all the possible backends. So, `backend.execute_circuit(c, *args)` should be fully equivalent to having the `backend` fixture call `set_backend()` and using `c(*args)`.

However, that's not the case, and I'm worried that all tests should be updated to be consistent, unless they call `set_backend()` within the test.

One way or the other, I'd like there to be a unique way of doing things. And in this PR, I'd use the current one, even if it's suboptimal.

If there truly is a better way (which is most likely true, and it will be the one you're using, or, slightly worse, changing the `backend` fixture to set the backend as well), we should open a PR right after this one to implement it consistently over all the backend-dependent tests.

What do you think? @BrunoLiegiBastonLiegi @stavros11
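For what it's worth, the parameter grid that the `conftest.py` snippet above builds can be reproduced in isolation. The sketch below uses made-up backend and accelerator names (assumptions for illustration, not qibo's actual lists) just to show how the `(backend_name, accelerators)` pairs come out:

```python
# Stand-alone reproduction of the grid-building logic from the conftest
# snippet above. The backend/accelerator names are illustrative
# assumptions, not qibo's real configuration.
AVAILABLE_BACKENDS = ["numpy", "qibojit", "qulacs"]
MULTIGPU_BACKENDS = ["qibojit"]
ACCELERATORS = [{"/GPU:0": 1}, {"/GPU:0": 1, "/GPU:1": 1}]

# One (backend, None) pair per plain backend...
config = [(backend, None) for backend in AVAILABLE_BACKENDS]
# ...plus one (backend, accelerators) pair per multi-GPU combination.
config.extend((bk, acc) for acc in ACCELERATORS for bk in MULTIGPU_BACKENDS)

for pair in config:
    print(pair)
```

Each pair then becomes one parametrized test case, which is why a test body that ignores the `backend` fixture still runs once per entry, just always on the same global backend.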
> I thought that @stavros11 set these tests to run with all the available backends, through a mechanism implemented in `conftest.py`

If I remember correctly, the `backend` fixture does not set the `GlobalBackend`. Therefore `c(nshots=100)` in a test that uses the `backend` fixture is not equivalent to `backend.execute_circuit(c, nshots=100)`. The second will indeed test all the different available backends, while the first will only test whatever `GlobalBackend` was set at that time. Actually, it will probably repeat the test using the same `GlobalBackend` multiple times, because the fixture will still loop over all available backends.

The `GlobalBackend` in this case is not very well defined, because if a previous test switches it (using `set_backend`) then that value will be used in all upcoming tests, until it is switched by another test. This is usually bad (as is the idea of `GlobalBackend`, and more generally of global variables), because tests that are supposed to be independent actually depend on what happened in previous tests.

I think the general idea is to avoid using the `GlobalBackend` in tests, except for the very few tests that exist to test the `GlobalBackend` functionality itself. All tests of other features should use the `backend.execute_circuit` approach. If a test needs to use the `GlobalBackend`, this should ideally be done as:

```python
def test_...():
    original_backend = qibo.get_backend()
    qibo.set_backend("backend to test")
    # test code here
    qibo.set_backend(original_backend)
```

We could also wrap this in another (or the same?) fixture, but I don't think that exists right now.
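Such a wrapper might look like the following save/switch/restore sketch. It is self-contained for illustration, using a stand-in registry instead of qibo's real `qibo.get_backend`/`qibo.set_backend`, so the names here are assumptions:

```python
from contextlib import contextmanager

# Stand-in for the global backend registry; in qibo this role is played
# by the GlobalBackend together with qibo.get_backend / qibo.set_backend.
_GLOBAL_BACKEND = "numpy"


def get_backend():
    return _GLOBAL_BACKEND


def set_backend(name):
    global _GLOBAL_BACKEND
    _GLOBAL_BACKEND = name


@contextmanager
def tmp_backend(name):
    """Temporarily switch the global backend, restoring it afterwards."""
    original = get_backend()
    set_backend(name)
    try:
        yield name
    finally:
        # Restore even if the test body raised, so later tests are unaffected.
        set_backend(original)
```

A pytest fixture could wrap the same pattern, so that each test gets a clean global state regardless of what previous tests did.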
No, it doesn't; indeed I wrote it (in a bit convoluted way) in my previous comment (which was mainly hypothetical, eventually negating the hypothesis...).

I actually searched, and only a few tests are actually using `set_backend()`, most of them just to test it.
Tests using `set_backend`:

```
❯ rg set_backend
test_backends_qibotn.py
13: qibo.set_backend(backend="qibotn", platform="qutensornet", runcard=None)
16: qibo.set_backend("numpy")

test_backends_clifford.py
8:from qibo import Circuit, gates, set_backend
35:def test_set_backend(backend):
38: set_backend("clifford", platform=platform)
46: set_backend(backend.name, platform=backend.platform)
52: set_backend("numpy")

test_models_qcnn.py
319: qibo.set_backend("qibojit")

test_backends_global.py
7:def test_set_backend():
11: qibo.set_backend("numpy")
31: qibo.set_backend("numpy")
46: qibo.set_backend("numpy")
83: qibo.set_backend("numpy")
91: qibo.set_backend("numpy")

test_models_dbi.py
6:from qibo import hamiltonians, set_backend

test_backends.py
6:from qibo import construct_backend, gates, list_available_backends, set_backend
130:def test_set_backend_error():
132: set_backend("non-existing-backend")
```
Many more are already using `backend.execute_circuit()`, against my expectation.

So, fine, this PR is aligning with the majority, and that's good. Sorry for doubting it.

We should just fix the other tests (like `test_models_qcnn.py`) or remove the import where it is not even used (e.g. `test_models_dbi.py`, ...).
> I actually searched, and only a few tests are actually using `set_backend()`, most of them just to test it.

This is fine, but there are also tests that are using `circuit()` without `set_backend()`. These also invoke the `GlobalBackend` to execute the circuit and are even worse, because the `GlobalBackend` will be whatever was set beforehand (by the last `set_backend()` of a previous test). These may also be a bit harder to search for, because they can appear in various ways: `circuit()`, `circuit.execute()`, `c()`, `whatever_circuit_name()`, `c(nshots=100)` (like the one fixed here), etc. An easy way would be to remove the `Circuit.__call__` and `Circuit.execute` methods and see which tests fail.
By the way, I think that this discussion is not very relevant to this PR and maybe should be converted to an issue.
> By the way, I think that this discussion is not very relevant to this PR and maybe should be converted to an issue.

I agree about the issue. If you wish, feel free to open it yourself; you're definitely the most expert.

> An easy way would be to remove the `Circuit.__call__` and `Circuit.execute` methods and see which tests fail.

Another option is to keep `Circuit.__call__`, but require a `backend` argument to be passed.
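As a rough illustration of that option, `__call__` could refuse to run without an explicit backend. The classes below are minimal stand-ins sketched for this discussion, not qibo's actual `Circuit`/backend API:

```python
class FakeBackend:
    """Minimal stand-in for a qibo backend object (illustrative only)."""

    def __init__(self, name):
        self.name = name

    def execute_circuit(self, circuit, nshots=1000):
        return f"{circuit.nqubits} qubits on {self.name} with {nshots} shots"


class Circuit:
    """Stand-in circuit whose __call__ requires an explicit backend."""

    def __init__(self, nqubits):
        self.nqubits = nqubits

    def __call__(self, backend=None, nshots=1000):
        if backend is None:
            # No silent fallback to a global backend.
            raise ValueError("an explicit backend is required to execute a circuit")
        return backend.execute_circuit(self, nshots=nshots)
```

This would keep the convenient `c(...)` syntax while making the executing backend impossible to overlook.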
Do we have a list of tests relying on a random state of the `GlobalBackend`?
> Do we have a list of tests relying on a random state of the `GlobalBackend`?

These are the tests that execute circuits using the `circuit.execute` interface, which invokes the `GlobalBackend()`:

- `test_backends_global::test_circuit_execution` (sets to numpy)
- `test_measurements::test_measurement_compiled_circuit` (does not set)
- `test_models_circuit_execution::test_compiled_execute` (does not set)
- various tests in `test_models_qcnn`, which use the `GlobalBackend` but set it beforehand to qibojit or numpy without resetting it afterwards
- `test_models_variational::test_vqe` (does not set)
- `test_models_variational::test_custom_loss` (does not set)
Co-authored-by: Stavros Efthymiou <[email protected]>
Thanks @BrunoLiegiBastonLiegi, looks good to me and seems to work after the latest fixes.
```python
import qulacs  # pylint: disable=import-error
from qulacs import (  # pylint: disable=no-name-in-module, import-error
```
Do we need these exceptions because otherwise pylint fails for mac with py3.9 in the CI? If yes, I guess that's fine; otherwise, I am not sure they are needed, given that the CI uses the test dependencies, which include qulacs.
Yes, if I remember correctly `pylint` was complaining in `macos-python3.9`.
I'd say this is definitely the best way you could do it. At least it is confined within the `qibo.backends.qulacs` module, though not restricted to the offending macos+py3.9 combo.

If done in the workflow, it would be positioned in a larger scope, though you may be able to ignore specifically that file for that combo.
```diff
@@ -13,6 +13,7 @@


 def test_classifier_circuit2():
     """ """
+    set_backend("numpy")
```
Following the discussion above, this is also not very good practice, because it will also set numpy as the `GlobalBackend` for all upcoming tests. However, since everything should work with numpy, it will probably not cause any issue. Maybe we could just open an issue for the test discussion and postpone it for now.
Yeah, I agree; however, the `QCNN` model doesn't seem to support specifying the execution backend and always uses the `GlobalBackend`. I opened an issue about this already: #1362. The tests were originally using the `NumpyBackend` (the global one), and thus I set it explicitly here.
Just one last tiny comment. Other than that, it is good for me.
Thanks @BrunoLiegiBastonLiegi!
@BrunoLiegiBastonLiegi whenever the CI passes, feel free to merge :) (just make sure beforehand that your branch is up-to-date with the base branch)
@scarrazza could you withdraw your review?
This adds `qulacs` as an optional backend.

Checklist: