Use real samples on QPU device #32
Conversation
Codecov Report
```diff
@@            Coverage Diff             @@
##           master      #32      +/-   ##
==========================================
- Coverage   87.17%   78.13%    -9.05%
==========================================
  Files           5        5
  Lines         273      279       +6
==========================================
- Hits          238      218      -20
- Misses         35       61      +26
```
Continue to review full report at Codecov.
Thanks @dabacon! If I recall, this is something we debated initially while putting the plugin together, as we weren't sure how to extract samples from the API.
Quick question: from looking at the change, the new logic uses the counts to 'repeat' the measured computational basis states in order from 0 to N-1, and then converts them into the equivalent binary representation.
Would the inherent ordering of the returned samples be an issue? I can't formulate a strong reason why it would be, but was curious whether it makes sense to do an in-place `np.random.shuffle` before returning them.
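For concreteness, here is a minimal sketch of that repeat-and-binarize logic. The names (`counts_to_samples`, `num_wires`) are illustrative, not the plugin's actual identifiers:

```python
import numpy as np

def counts_to_samples(counts, num_wires):
    """Expand a {basis-state index: count} histogram into per-shot samples.

    Each measured index is repeated according to its count, then converted
    to its binary (computational-basis) representation as rows of 0s and 1s.
    """
    indices = np.repeat(list(counts.keys()), list(counts.values()))
    return np.array(
        [list(map(int, np.binary_repr(i, width=num_wires))) for i in indices]
    )

# Example: 3 shots of |00> and 1 shot of |11> on 2 wires.
samples = counts_to_samples({0: 3, 3: 1}, num_wires=2)
# array([[0, 0], [0, 0], [0, 0], [1, 1]])
```

Note the rows come out sorted by basis-state index, which is exactly the ordering question raised above.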
Some APIs do return the results in the order the experiments were run, but IonQ's API doesn't return this ordering information. For such an API you can analyze things like drift over time. Fun stuff :) The one reason I can think of for doing a shuffle is that it handles bugs where someone slices the results expecting that to be equivalent to asking for a sub-sample. The downside is that this introduces a dependency on numpy's random number generator, which is a bit strange. One solution would be to set the seed explicitly for this. But I guess the question is whether you think there will be code that does this slicing; if so, I think the random shuffle is the right way to go.
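To illustrate the slicing pitfall: because the reconstructed samples are sorted by basis-state index, taking the first k rows systematically over-represents low-index states. A sketch of the fix being discussed, continuing the hypothetical `counts_to_samples` above with an explicit seed:

```python
import numpy as np

samples = counts_to_samples({0: 3, 3: 1}, num_wires=2)  # sorted: |00> rows first

np.random.seed(42)          # explicit seed, so results remain reproducible
np.random.shuffle(samples)  # in-place shuffle along the first (shot) axis

# A slice such as samples[:2] now behaves like an unbiased sub-sample instead
# of returning only the low-index basis states.
```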
Good point, I think this is an important enough reason to do the shuffling. In fact, I realize now there are cases where this could be required. One example is any case where you are distributing expectation values across sub-batches of shots from a single job --- e.g., the extreme case where single-shot gradient variance is needed for shot-adaptive optimization.
Thanks @dabacon! Looks good.
We should add a (mocked) test of this newly written method to test_device.py before merging.
Shuffle is in place! Fixed this and the wrong period placement.
Thanks @dabacon!
In the case of the actual QPU, instead of resampling from the returned results, one should just use the actual samples.
This is a bit convoluted because the API currently returns probabilities instead of counts, so it should be upgraded if/when counts are exposed.
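Until then, the returned probabilities have to be converted back into approximate counts before samples can be reconstructed. A hedged sketch of one way to do this; the `shots` argument and the rounding choice are assumptions, not the API's actual contract:

```python
def probs_to_counts(probs, shots):
    """Approximate integer counts from returned probabilities.

    Rounding can gain or lose a few shots relative to the true histogram,
    which is why exposing raw counts in the API would be preferable.
    """
    return {state: int(round(p * shots)) for state, p in probs.items()}

counts = probs_to_counts({0: 0.75, 3: 0.25}, shots=4)
# {0: 3, 3: 1} -- this feeds directly into the counts_to_samples sketch above
```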