Add layer fidelity experiment #1322
Conversation
Looks good, I confirm it works! What is the syntax for pulling out the individual fidelities? It would be nice to have a function that, after the analysis, gives the layer fidelity for a subset of the gates, i.e. the function takes in the gates (or qubits) and gives back the layer fidelity. If given the qubits, it returns the product over all gates with both qubits in the list; for gates with only one qubit in the list, it takes the product of fid^(0.5).
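A helper along the lines the reviewer describes could look like the sketch below; `subset_layer_fidelity`, its argument shapes, and the example fidelities are hypothetical illustrations, not part of the PR:

```python
from math import sqrt

def subset_layer_fidelity(gate_fidelities, qubits):
    """Combine per-gate process fidelities into a layer fidelity for a qubit subset.

    gate_fidelities: dict mapping 2-qubit index tuples to fitted process fidelities
    qubits: iterable of qubit indices defining the subset of interest
    """
    qubits = set(qubits)
    lf = 1.0
    for (q0, q1), fid in gate_fidelities.items():
        inside = (q0 in qubits) + (q1 in qubits)
        if inside == 2:    # both qubits in the subset: full contribution
            lf *= fid
        elif inside == 1:  # only one qubit in the subset: sqrt contribution
            lf *= sqrt(fid)
    return lf
```

For example, with fidelities `{(0, 1): 0.99, (1, 2): 0.98}`, the subset `[0, 1]` would give `0.99 * sqrt(0.98)`.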
Need to check one of the prefactors in the layer fidelity formula... will do that tomorrow.
Ok... I do see one pretty major issue: the Cliffords should be randomized over each 2Q set as well. I noticed that each pair of qubits in the layer has the same 1Q Cliffords. (Edit: I see this is an option that can be set... I would set the default to randomize everything.)
Can you also check the EPLG formula? It should be 1 - LF^(1/N_2Q_gates), not 1 - LF^(1/N_qubits).
I understand why. Suppose 4 qubits with linear connectivity 0-1-2-3 and layers [CX(1, 0), CX(2, 3)]. In this setting, ZZ errors (and XX and YY errors as well) on the controls (1, 2) are likely to be underestimated. An IZZI error at a layer can be moved to the end (assuming ideal Clifford operations), where it turns into one of {XX, YY, ZZ}_controls (x) {II, XX, YY, ZZ}_targets randomly under the paired random Cliffords and CXs. Here, we ignore ZZ errors during single-qubit Cliffords for simplicity. Summing such interactions moved to the end over all layers, the weights of XX and YY in the total interaction should be the same on average. If XX and YY have the same weight, the first column of the resulting unitary is identical to |0000>, suggesting the noise tends not to be reflected in the survival probability.
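The propagation facts this argument relies on can be checked with plain NumPy. A minimal sketch (the matrix conventions below are my own choice, with the control as the most significant qubit; none of this code is from the PR):

```python
import numpy as np

# Single-qubit Paulis and CX with control = most significant qubit.
I = np.eye(2)
X = np.array([[0, 1], [1, 0]])
Z = np.diag([1, -1])
CX = np.array([[1, 0, 0, 0],
               [0, 1, 0, 0],
               [0, 0, 0, 1],
               [0, 0, 1, 0]])

# A Z error on the control commutes with CX, so a ZZ term between the two
# controls (the IZZI term for layers [CX(1, 0), CX(2, 3)]) can be pushed
# past the CX layer unchanged:
ZI = np.kron(Z, I)
assert np.allclose(CX @ ZI, ZI @ CX)

# By contrast, an X error on the control spreads onto the target
# (CX · X_c · CX = X_c X_t), which is how the random Cliffords turn the
# interaction into the {XX, YY, ZZ} mixture described above:
XI = np.kron(X, I)
assert np.allclose(CX @ XI @ CX, np.kron(X, X))
```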
```python
value=lf,
quality=quality_lf,
)
eplg = 1 - (lf ** (1 / self.num_2q_gates))
```
I've confirmed that EPLG = 1 - LF^(1/N_2Q_gates).
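As a numeric sanity check on that formula (variable names follow the diff above; the numbers themselves are made up for illustration):

```python
# EPLG (error per layered gate) inverts the layer fidelity over the number of
# 2Q gates in the two disjoint layers, not over the number of qubits.
lf = 0.8           # hypothetical fitted layer fidelity
num_2q_gates = 99  # e.g. a 100-qubit linear chain: 99 CX gates across two layers
eplg = 1 - lf ** (1 / num_2q_gates)
# For lf close to 1, eplg is well approximated by -ln(lf) / num_2q_gates.
```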
Thanks @itoko, it looks great overall. I tried to run the notebook on dublin with a small chain and got the error `Delays must be a multiple of 16 samples`. Not sure if that's an error that needs to be addressed.
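That error comes from the backend's timing constraints: delay durations must be a multiple of the timing granularity (16 dt samples here). One possible fix is to snap delays to the nearest valid multiple; a hedged sketch (`round_delay` is a hypothetical helper name, not part of this PR):

```python
def round_delay(duration_dt, granularity=16):
    """Round a delay duration (in dt samples) to the nearest multiple of the
    backend's timing granularity, using integer arithmetic so the result is exact."""
    return ((duration_dt + granularity // 2) // granularity) * granularity
```

For example, a 100-sample delay would be rounded down to 96, and a 104-sample delay rounded up to 112.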
```python
@lru_cache(maxsize=24 * 24)
def _product_1q_nums(first: Integral, second: Integral) -> Integral:
```
What about caching these numbers in a file? It seems like a lot of work to make circuits just to convert them to numbers.
Addressed at 0c3bd30. The cache file size is 1.3 kB, so the overhead of loading the file should be negligible.
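The file-caching pattern discussed here could look roughly like the following sketch. The file name, table layout, and `_compute_product` placeholder are illustrative assumptions; the real code composes the two 1Q Clifford elements and looks up the index of their product:

```python
import os
import numpy as np

CACHE_FILE = "clifford_1q_products.npy"  # hypothetical file name
NUM_1Q_CLIFFORDS = 24  # size of the single-qubit Clifford group

def _compute_product(first, second):
    # Placeholder only: the real implementation multiplies the two Clifford
    # elements and returns the group index of the product.
    return (first + second) % NUM_1Q_CLIFFORDS

def load_product_table(path=CACHE_FILE):
    """Load the 24x24 product table from disk, computing and caching it on first use."""
    if os.path.exists(path):
        return np.load(path)
    table = np.array(
        [[_compute_product(i, j) for j in range(NUM_1Q_CLIFFORDS)]
         for i in range(NUM_1Q_CLIFFORDS)],
        dtype=np.uint8,
    )
    np.save(path, table)  # 24 * 24 uint8 entries: well under the 1.3 kB quoted above
    return table
```

Every call after the first is then a plain array load plus an index lookup, with no circuit construction involved.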
LGTM, thank you!
Summary
Add a new experiment class to measure layer fidelity, which is a holistic benchmark to characterize the full quality of the devices at scale (https://arxiv.org/abs/2311.05933)
Example notebook: run_lf_qiskit_experiments_large.ipynb (Gist)
Experimental features:

- `circuits()` method returns circuits on physical qubits (not virtual qubits as usual)
- `reason` as an extra analysis result entry to tell users why the `quality` of the analysis was "bad"

Follow-up items:

Features decided not to include:

- `full_sampling` option (which `StandardRB` has). `LayerFidelity` behaves as if always setting `full_sampling==True`, to avoid correlation between sequences, which RB theory does not consider.
- `replicate_in_parallel` option that would allow using a common direct RB sequence for all qubit pairs. It turned out that `replicate_in_parallel==True` may underestimate some types of errors, suggesting it should always be set to `False`.

Issues to be addressed separately:

- Figures are not associated with `AnalysisResult` (i.e. users who find a bad fitting result in `AnalysisResult` cannot easily look at a figure relevant to the result). ==> For now, you can query a figure by its name; e.g. `exp_data.figure("DirectRB_Q83_Q82")`
Dependencies: