
Add layer fidelity experiment #1322

Merged (43 commits, May 1, 2024)
Conversation

@itoko (Contributor) commented Nov 17, 2023

Summary

Add a new experiment class to measure layer fidelity, a holistic benchmark for characterizing the overall quality of quantum devices at scale (https://arxiv.org/abs/2311.05933).

Example notebook: run_lf_qiskit_experiments_large.ipynb (Gist)

Experimental features:

  • As an exception to the usual convention, the circuits() method returns circuits on physical qubits (not virtual qubits)
  • A reason entry is added to the analysis results to tell users why the quality of an analysis was "bad"

Follow-up items:

  • Add an API for customizing DD (e.g. register DD sequence generators by name and select one by name in experiment_options)
def dd_func1(delay_length, backend) -> list[Instruction]:
    ...

LayerFidelity.dd_functions = {
    "dd1": dd_func1,
    "dd2": dd_func2,
}
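The registration pattern sketched above can be mocked without a backend as follows (an illustrative sketch only: the real generators would return qiskit Instruction objects, and the names dd_func1/dd_func2/dd_functions come from the proposal above, not from merged code; the string placeholders are made up):

```python
# Backend-free mock of the proposed DD-registration API.
# Real DD functions would build qiskit Instruction sequences; placeholder
# strings are used here so the sketch is self-contained.

def dd_func1(delay_length, backend=None):
    # e.g. an X-X (CPMG-like) sequence splitting the delay in half
    half = delay_length / 2
    return ["x", f"delay[{half}]", "x", f"delay[{half}]"]

def dd_func2(delay_length, backend=None):
    # e.g. a single X pulse centered in the delay
    half = delay_length / 2
    return [f"delay[{half}]", "x", f"delay[{half}]"]

class LayerFidelity:
    # registry keyed by name; experiment_options would refer to these names
    dd_functions = {}

LayerFidelity.dd_functions = {
    "dd1": dd_func1,
    "dd2": dd_func2,
}

# Looking up a sequence generator by the name given in experiment_options:
seq = LayerFidelity.dd_functions["dd1"](160)
```

The point of the registry is that an experiment option only needs to carry a short string ("dd1") while the generator itself can depend on backend details.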

Features decided not to include:

  • full_sampling option (which StandardRB has). LayerFidelity always behaves as if full_sampling==True, to avoid correlations between sequences that RB theory does not account for.
  • replicate_in_parallel option, which would allow a common direct RB sequence to be reused for all qubit pairs. It turned out that replicate_in_parallel==True may underestimate some types of errors, suggesting it should always be set to False.

Issues to be addressed separately

  • Poor interface for querying figures: no pointer to the relevant figure (or to the data used for fitting) is stored in AnalysisResult, so users who find a bad fitting result in AnalysisResult cannot easily look up the figure relevant to that result.
    ==> For now, you can query a figure by its name, e.g. exp_data.figure("DirectRB_Q83_Q82")

Dependencies:

@itoko itoko force-pushed the layer-fidelity branch 2 times, most recently from 898ccc5 to 55d9917 Compare November 21, 2023 13:08
@itoko itoko changed the title WIP: Add layer fidelity experiment Add layer fidelity experiment Nov 28, 2023
@dcmckayibm (Collaborator)

Looks good, I confirm it works!

What is the syntax for pulling out the individual fidelities? It would be nice to have a function that, after the analysis, gives the layer fidelity for a subset of the gates, i.e. the function takes the gates (or qubits) and gives back the layer fidelity. If given qubits, it returns the product of the fidelities of all gates with both qubits in the list, and for gates with only one qubit in the list it multiplies in fid^(0.5).

@dcmckayibm (Collaborator)

Looks like df[(df.name == "ProcessFidelity") & (df.qubits==(59,60))].value
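The subset helper requested a few comments earlier could be sketched like this (a hypothetical function, not part of the PR; subset_layer_fidelity and pair_fidelities are made-up names, with pair_fidelities standing in for the ProcessFidelity values pulled from the analysis dataframe as shown above):

```python
from math import sqrt

def subset_layer_fidelity(pair_fidelities, qubits):
    """Layer fidelity restricted to a subset of qubits.

    pair_fidelities: dict mapping 2Q-gate qubit pairs to process fidelities.
    Gates with both qubits in the subset contribute their full fidelity;
    gates with exactly one qubit in the subset contribute fid**0.5.
    """
    qubits = set(qubits)
    lf = 1.0
    for (a, b), fid in pair_fidelities.items():
        inside = (a in qubits) + (b in qubits)
        if inside == 2:
            lf *= fid
        elif inside == 1:
            lf *= sqrt(fid)
    return lf

# Made-up fidelities on a 0-1-2-3 chain, for illustration only:
pf = {(0, 1): 0.99, (1, 2): 0.98, (2, 3): 0.97}
lf_subset = subset_layer_fidelity(pf, [0, 1, 2])
```

Here the gate (2, 3) straddles the boundary of the subset {0, 1, 2}, so only the square root of its fidelity is counted, exactly as described in the comment above.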

@dcmckayibm (Collaborator)

Need to check one of the prefactors in the layer fidelity formula...will do that tomorrow.

@dcmckayibm (Collaborator) commented Feb 4, 2024

Ok... I do see one pretty major issue: the Cliffords should be randomized over each 2Q set as well. I noticed that each pair of qubits in the layer has the same 1Q Cliffords. (Edit: I see this is an option that can be set... I would make randomizing everything the default, i.e. replicate_in_parallel=False.)

@dcmckayibm (Collaborator)

Can you also check the EPLG formula? It should be 1 - LF^(1/n_2Q_gates), not 1 - LF^(1/N_qubits).

@coruscating coruscating added this to the Release 0.7 milestone Feb 8, 2024
@itoko itoko marked this pull request as ready for review February 14, 2024 10:30
@itoko (Contributor, Author) commented Feb 14, 2024

I understand why replicate_in_parallel==True may miss some errors by example (see below), so I've dropped the option in e2ef0e2. Now the LF experiment always generates circuits with different random 1Q Cliffords for different qubit pairs.

Suppose 4 qubits with linear connectivity 0-1-2-3 and layers [CX(1, 0), CX(2, 3)]. In this setting, ZZ errors (as well as XX and YY errors) on the controls (1, 2) are likely to be underestimated. An IZZI error at a layer can be moved to the end of the circuit (assuming ideal Clifford operations), where the paired random Cliffords and CXs turn it into one of {XX, YY, ZZ} on the controls tensored with one of {II, XX, YY, ZZ} on the targets, chosen randomly. (Here we ignore ZZ errors during the single-qubit Cliffords for simplicity.) Summing such interactions moved to the end over all layers, the XX and YY weights of the total interaction are equal on average. But if XX and YY have equal weight, the first column of the resulting unitary is identical to |0000>, so the noise tends not to be reflected in the survival probability.
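The last step of that argument, that an equal-weight XX + YY interaction leaves |00> (and hence the survival probability) untouched, can be verified numerically. A small NumPy/SciPy sketch, purely illustrative and not part of the PR:

```python
import numpy as np
from scipy.linalg import expm

X = np.array([[0, 1], [1, 0]], dtype=complex)
Y = np.array([[0, -1j], [1j, 0]], dtype=complex)

XX = np.kron(X, X)
YY = np.kron(Y, Y)
H = XX + YY  # equal-weight XX and YY interaction on the two control qubits

ket00 = np.zeros(4, dtype=complex)
ket00[0] = 1.0

# XX|00> = |11> while YY|00> = -|11>, so |00> is annihilated by H ...
assert np.allclose(H @ ket00, 0)

# ... and therefore exp(-i*theta*H) leaves |00> invariant for any theta,
# meaning this error never shows up in the measured survival probability.
for theta in (0.1, 0.7, 1.3):
    U = expm(-1j * theta * H)
    assert np.allclose(U @ ket00, ket00)
```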

value=lf,
quality=quality_lf,
)
eplg = 1 - (lf ** (1 / self.num_2q_gates))
@itoko (Contributor, Author)

I've confirmed that EPLG = 1 - LF^(1/n_2Q_gates).
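Concretely, the corrected formula from the snippet above looks like this (a sketch with made-up fidelity numbers, assuming the layer fidelity is the product of the per-gate ProcessFidelity values as discussed earlier in the thread):

```python
# Hypothetical per-pair process fidelities on a 4-qubit chain 0-1-2-3
# with three 2Q gates; the values are illustrative, not measured data.
pair_fidelities = {(0, 1): 0.995, (1, 2): 0.990, (2, 3): 0.992}

num_2q_gates = len(pair_fidelities)  # 3 gates, on 4 qubits

# Layer fidelity as the product of the per-gate process fidelities ...
lf = 1.0
for fid in pair_fidelities.values():
    lf *= fid

# ... and EPLG normalized by the number of 2Q gates, not the qubit count:
eplg = 1 - lf ** (1 / num_2q_gates)
```

Note that dividing by the number of qubits (4) instead of the number of 2Q gates (3) would give a slightly different, incorrect value, which is the bug flagged earlier in the review.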

@coruscating (Collaborator) left a comment

Thanks @itoko, it looks great overall. I tried to run the notebook on dublin with a small chain and got the error Delays must be a multiple of 16 samples. Not sure if that's an error that needs to be addressed.



@lru_cache(maxsize=24 * 24)
def _product_1q_nums(first: Integral, second: Integral) -> Integral:
@coruscating (Collaborator) commented Apr 1, 2024

What about caching these numbers in a file? It seems like a lot of work to make circuits just to convert them to numbers.

@itoko (Contributor, Author) commented Apr 8, 2024

Addressed in 0c3bd30. The cache file is 1.3 kB, so the overhead of loading it should be negligible.

releasenotes/notes/layer-fidelity-1e09dea9e5b69515.yaml (outdated review thread, resolved)
@coruscating (Collaborator) left a comment

LGTM, thank you!

@coruscating coruscating added this pull request to the merge queue May 1, 2024
Merged via the queue into qiskit-community:main with commit f352b3c May 1, 2024
11 checks passed
3 participants