QulacsBackend (#1272)
Do we need these exceptions because otherwise pylint fails for mac with py3.9 in the CI? If yes, I guess that's fine; otherwise I am not sure they are needed, given that the CI is using the test dependencies, which include qulacs.
Yes, if I remember correctly `pylint` was complaining in macos-python3.9.
I'd say this is definitely the best way you could do it. At least it is confined within the `qibo.backends.qulacs` module, though not restricted to the offending macos+py3.9 combo. If done in the workflow, it would sit in a larger scope, though you may be able to ignore specifically that file for that combo.
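For reference, the kind of guard being discussed presumably looks along these lines; this is a hypothetical sketch (the names `QULACS_AVAILABLE` and `require_qulacs` are illustrative, not the actual `qibo.backends.qulacs` code):

```python
# Hypothetical optional-dependency guard for qulacs; the pylint directive
# silences the import-error false positive seen on macos + py3.9 CI runs.
try:
    import qulacs  # pylint: disable=import-error
    QULACS_AVAILABLE = True
except ImportError:
    qulacs = None
    QULACS_AVAILABLE = False


def require_qulacs():
    """Return the qulacs module, or raise a clear error if it is missing."""
    if not QULACS_AVAILABLE:
        raise ModuleNotFoundError(
            "qulacs is not installed; install it to use QulacsBackend"
        )
    return qulacs
```

Confining the guard to the backend module keeps the rest of the package importable when the optional dependency is absent.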
I like the explicit backend usage a lot, but why do you need it for this PR?
Because otherwise only the `GlobalBackend` is tested here, if I am not mistaken. Anyway, this was failing with the `QulacsBackend`, but it was not supposed to be testing it, even though it did due to the `GlobalBackend` usage.
I thought that @stavros11 set these tests to run with all the available backends, through a mechanism implemented in `conftest.py` (`qibo/tests/conftest.py`, lines 63 to 87 in c68e58b).

It's true that it is using the `GlobalBackend`, but it should also be setting the backend registered there to all the possible backends. So, `backend.execute_circuit(c, *args)` should be fully equivalent to having the `backend` fixture call `set_backend()` and then using `c(*args)`.

However, that's not the case, and I'm worried that all tests should be updated to be consistent, unless they are calling `set_backend()` within the test.

One way or the other, I'd like there to be a unique way of doing things. And in this PR, I'd use the current one, even if it's suboptimal. If there is truly a better way (which is most likely true, and it will be the one you're using, or, slightly worse, changing the `backend` fixture to set the backend as well), we should open a PR right after this one to implement it consistently over all the tests that are `backend`-dependent.

What do you think? @BrunoLiegiBastonLiegi @stavros11
If I remember correctly, the `backend` fixture does not set the `GlobalBackend`. Therefore, `c(*args)` in a test that uses the `backend` fixture is not equivalent to `backend.execute_circuit(c, *args)`. The second will indeed test all the different available backends, while the first will only test whatever `GlobalBackend` was set at that time. Actually, it will probably repeat the test using the same `GlobalBackend` multiple times, because the fixture will still loop over all available backends.

The `GlobalBackend` in this case is not very well defined, because if a previous test switches it (using `set_backend`) then this value will be used in all upcoming tests, until it is switched by another test. This is usually bad (as is the idea of `GlobalBackend`, and more generally global variables), because tests that are supposed to be independent actually depend on what happened in previous tests.

I think the general idea is to avoid using the `GlobalBackend` in tests, except for the very few tests that exist for testing the `GlobalBackend` functionality itself. All tests of other features should use the `backend.execute_circuit` approach. If a test needs to use the `GlobalBackend`, this should ideally be done by setting it explicitly within the test. We could also wrap this into another (or the same?) fixture, but I don't think that exists right now.
No, it doesn't; indeed I wrote that (in a bit convoluted way) in my previous comment (which was mainly hypothetical, eventually negating the hypothesis...).

I actually searched, and only a few tests are actually using `set_backend()`, most of them just to test it. Many more are already using `backend.execute_circuit()`, against my expectation.

So, fine, this PR is aligning with the majority, and that's good. Sorry for doubting it. We should just fix the other tests (like `test_models_qcnn.py`) or remove the import where it is not even used (e.g. `test_models_dbi.py`, ...).
This is fine, but there are also tests that are using `circuit()` without `set_backend()`. These are also invoking the `GlobalBackend` to execute the circuit and are even worse, because the `GlobalBackend` will be whatever was set beforehand (by the last `set_backend()` of a previous test). These may also be a bit harder to search for, because they can appear in various ways: `circuit()`, `circuit.execute()`, `c()`, `whatever_circuit_name()`, `c(nshots=100)` (like the one fixed here), etc. An easy way would be to remove the `Circuit.__call__` and `Circuit.execute` methods and see which tests fail.

By the way, I think that this discussion is not very relevant to this PR and maybe should be converted to an issue.
I agree about the issue. If you wish, feel free to open it yourself; you're definitely the most expert. Another option is to keep `Circuit.__call__`, but require a `backend` argument to be passed.
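A sketch of that option (hypothetical; qibo's actual `Circuit` class and `execute_circuit` signature may differ) could be:

```python
# Hypothetical sketch: keep Circuit.__call__ but require an explicit backend,
# so any implicit reliance on a global backend fails loudly instead of
# silently running on whatever was set by a previous test.
class Circuit:
    def __call__(self, *args, backend=None, **kwargs):
        if backend is None:
            raise ValueError(
                "an explicit backend is required to execute a circuit"
            )
        return backend.execute_circuit(self, *args, **kwargs)


# Usage with a dummy backend object standing in for a real one:
class DummyBackend:
    def execute_circuit(self, circuit, nshots=None):
        return nshots


c = Circuit()
print(c(backend=DummyBackend(), nshots=100))  # 100
```

With this shape, every `c(...)` call site in the tests is forced to name its backend, which would surface all the implicit-`GlobalBackend` tests at once.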
#1374
Do we have a list of tests relying on a random state of the `GlobalBackend`?
These are the tests that execute circuits using the `circuit.execute` interface, which invokes the `GlobalBackend()`:

- `test_backends_global::test_circuit_execution` (sets to numpy)
- `test_measurements::test_measurement_compiled_circuit` (does not set)
- `test_models_circuit_execution::test_compiled_execute` (does not set)
- `test_models_qcnn` tests are using the `GlobalBackend`, but setting it beforehand to qibojit or numpy without resetting it after
- `test_models_variational::test_vqe` (does not set)
- `test_models_variational::test_custom_loss` (does not set)
Following the discussion above, this is also not very good practice, because it will also set numpy as the `GlobalBackend` for all upcoming tests. However, since everything should work with numpy, it will probably not cause any issue. Maybe we could just open an issue for the test discussion and postpone it for now.
Yeah, I agree; however, the `QCNN` model doesn't seem to support the specification of the execution backend and always uses the `GlobalBackend`. I opened an issue about this already: #1362. The tests were originally using the `NumpyBackend` (the global one), and thus I set it explicitly here.