
Port latest parameters to 0.2 #205

Open · stavros11 wants to merge 8 commits into main

Conversation

stavros11 (Member)

Ports the latest parameters from the 0.1 branch to the 0.2 platform for qw11q. It also adds the B line in the platform, as we were missing that in 0.2. Now behavior should be equivalent to 0.1 for this chip:

@stavros11 (Member Author)

@andrea-pasquale I ported the latest calibration and updates for qw11q (#207, #210) and qw5q_platinum (#206) to the 0.2 version. I have not checked it yet, but in principle we could start testing 0.2 from here.

stavros11 changed the title from "Latest 0.1 parameters of qw11q" to "Port latest parameters to 0.2" on Jan 8, 2025
@andrea-pasquale (Contributor) commented on Jan 8, 2025

> @andrea-pasquale I ported the latest calibration and updates for qw11q (#207, #210) and qw5q_platinum (#206) to the 0.2 version. I have not checked it yet, but in principle we could start testing 0.2 from here.

Thanks @stavros11 for updating the platforms!
I can confirm that line D is working fine (it just needs recalibration), but whenever I test a qubit on line B I get this weird error message:

Error
[Qibo 0.2.12|INFO|2025-01-08 12:42:42]: Loading platform dummy
[Qibo 0.2.12|INFO|2025-01-08 12:42:42]: Loading platform dummy
2025-01-08 12:42:43,137 - qm - INFO     - Starting session: 7783547c-b83c-47d2-88d2-de994f1c1139
[Qibo 0.2.12|INFO|2025-01-08 12:42:43]: Loading platform qw11q
[Qibo 0.2.12|INFO|2025-01-08 12:42:43]: Loading platform qw11q
[Qibo 0.2.12|INFO|2025-01-08 12:42:43]: Loading platform qw11q
[Qibo 0.2.12|INFO|2025-01-08 12:42:43]: Loading platform qw11q
[Qibo 0.2.12|INFO|2025-01-08 12:42:43]: Loading platform qw11q
[Qibocal 0.1.1|WARNING|2025-01-08 12:42:43]: Deleting previous directory test_B1.
[Qibocal 0.1.1|INFO|2025-01-08 12:42:43]: Creating directory test_B1.
/nfs/users/andrea.pasquale/default/lib/python3.10/site-packages/qm/quantum_machines_manager.py:140: DeprecationWarning: QMM was opened with OctaveConfig. Please note that from QOP2.4.0 the octave devices are managed by the cluster setting in the QM-app. It is recommended to remove the OctaveConfig from the QMM instantiation.
  warnings.warn(
2025-01-08 12:42:48,372 - qm - INFO     - Performing health check
2025-01-08 12:42:48,394 - qm - WARNING  - Health check warning: Inter-OPX connectivity issues in OPX: con1. Missing ports are: 12, 11, 10, 9. See QM-App for more info.
2025-01-08 12:42:48,394 - qm - WARNING  - Health check warning: Inter-OPX connectivity issues in OPX: con2. Missing ports are: 12, 11, 10, 9. See QM-App for more info.
2025-01-08 12:42:48,394 - qm - WARNING  - Health check warning: Inter-OPX connectivity issues in OPX: con3. Missing ports are: 12, 11, 10, 9. See QM-App for more info.
2025-01-08 12:42:48,394 - qm - WARNING  - Health check warning: Inter-OPX connectivity issues in OPX: con4. Missing ports are: 12, 11, 10, 9. See QM-App for more info.
2025-01-08 12:42:48,394 - qm - WARNING  - Health check warning: Inter-OPX connectivity issues in OPX: con5. Missing ports are: 12, 11, 10, 9. See QM-App for more info.
2025-01-08 12:42:48,394 - qm - WARNING  - Health check warning: Inter-OPX connectivity issues in OPX: con6. Missing ports are: 12, 11, 10, 9. See QM-App for more info.
2025-01-08 12:42:48,394 - qm - WARNING  - Health check warning: Inter-OPX connectivity issues in OPX: con7. Missing ports are: 12, 11, 10, 9. See QM-App for more info.
2025-01-08 12:42:48,394 - qm - WARNING  - Health check warning: Inter-OPX connectivity issues in OPX: con8. Missing ports are: 12, 11, 10, 9. See QM-App for more info.
2025-01-08 12:42:48,394 - qm - WARNING  - Health check warning: Inter-OPX connectivity issues in OPX: con9. Missing ports are: 12, 11, 10, 9. See QM-App for more info.
2025-01-08 12:42:48,395 - qm - INFO     - Health check passed
[Qibocal 0.1.1|INFO|2025-01-08 12:42:52]: Executing mode AUTOCALIBRATION on single_shot_classification.
[Qibo 0.2.12|INFO|2025-01-08 12:42:52]: Minimal execution time: 0.51
Connected to: Rohde&Schwarz SGS100A (serial:1416.0505k02/114167, firmware:4.2.76.0-4.30.046.295) in 0.15s
Connected to: Rohde&Schwarz SGS100A (serial:1416.0505k02/114164, firmware:4.2.76.0-4.30.046.295) in 0.16s
Traceback (most recent call last):
  File "/nfs/users/andrea.pasquale/qibocal/runcards/recal.py", line 152, in <module>
    main(targets=args.targets, platform_name=args.platform, output=args.output)
  File "/nfs/users/andrea.pasquale/qibocal/runcards/recal.py", line 66, in main
    classification_output = e.single_shot_classification(
  File "/nfs/users/andrea.pasquale/qibocal/src/qibocal/auto/execute.py", line 225, in wrapper
    return self.run_protocol(protocol, parameters=action, mode=mode)
  File "/nfs/users/andrea.pasquale/qibocal/src/qibocal/auto/execute.py", line 130, in run_protocol
    completed = task.run(platform=self.platform, targets=self.targets, mode=mode)
  File "/nfs/users/andrea.pasquale/qibocal/src/qibocal/auto/task.py", line 159, in run
    completed.data, completed.data_time = operation.acquisition(
  File "/nfs/users/andrea.pasquale/qibocal/src/qibocal/auto/operation.py", line 40, in wrapper
    out = func(*args, **kwds)
  File "/nfs/users/andrea.pasquale/qibocal/src/qibocal/protocols/classification.py", line 226, in _acquisition
    results.update(platform.execute([sequence], **options))
  File "/nfs/users/andrea.pasquale/qibolab/src/qibolab/_core/platform/platform.py", line 290, in execute
    results |= self._execute(b, options, configs, sweepers)
  File "/nfs/users/andrea.pasquale/qibolab/src/qibolab/_core/platform/platform.py", line 224, in _execute
    new_result = instrument.play(configs, sequences, options, sweepers)
  File "/nfs/users/andrea.pasquale/qibolab/src/qibolab/_core/instruments/qm/controller.py", line 522, in play
    result = self.execute_program(experiment)
  File "/nfs/users/andrea.pasquale/qibolab/src/qibolab/_core/instruments/qm/controller.py", line 454, in execute_program
    machine = self.manager.open_qm(asdict(self.config))
  File "/nfs/users/andrea.pasquale/default/lib/python3.10/site-packages/qm/quantum_machines_manager.py", line 330, in open_qm
    self._octave_manager.set_octaves_from_qua_config(loaded_config.v1_beta.octaves)
  File "/nfs/users/andrea.pasquale/default/lib/python3.10/site-packages/qm/octave/octave_manager.py", line 567, in set_octaves_from_qua_config
    octave.end_batch_mode()
  File "/nfs/users/andrea.pasquale/default/lib/python3.10/site-packages/octave_sdk/octave.py", line 1269, in end_batch_mode
    BatchSingleton().end_batch_mode()
  File "/nfs/users/andrea.pasquale/default/lib/python3.10/site-packages/octave_sdk/batch.py", line 31, in end_batch_mode
    callback()
  File "/nfs/users/andrea.pasquale/default/lib/python3.10/site-packages/octave_sdk/_octave_client.py", line 250, in _end_batch_callback
    self._send_update(list(BatchSingleton().get_cached_updates(self).values()))
  File "/nfs/users/andrea.pasquale/default/lib/python3.10/site-packages/octave_sdk/_octave_client.py", line 287, in _send_update
    raise Exception(f"Octave update failed: {response.error_message}")
Exception: Octave update failed: Can not update fast switch mode to DIRECT or INVERTED when the FPGA is not available.
srun: error: fahid: task 0: Exited with exit code 1

I was just running the following script on the 0.2 qibocal branch:

from qibocal.auto.execute import Executor
from qibocal.cli.report import report

output = "test_B1"  # output directory (as in the log above)

with Executor.open(
    "myexec",
    path=output,
    platform="qw11q",
    targets=["B3"],
    update=False,
    force=True,
) as e:
    platform = e.platform
    classification_output = e.single_shot_classification(
        nshots=5000,
    )
    classification_output.update_platform(platform)
    report(e.path, e.history)

@stavros11 (Member Author)

> Thanks @stavros11 for updating the platforms! I can confirm that line D is working fine (it just needs recalibration), but whenever I test a qubit on line B I get this weird error message:

Thanks for testing @andrea-pasquale. I was just writing that I am getting the same error on line B. I have not tested line D, but it is even weirder that it works. I am investigating, because line B works fine with #210 on 0.1. I am also not sure what the error means.

Also, there is an issue with the CI that we should fix before merging this.

@stavros11 (Member Author)

> Thanks @stavros11 for updating the platforms! I can confirm that line D is working fine (it just needs recalibration), but whenever I test a qubit on line B I get this weird error message:
>
> Error

It seems that this weird error appears when using the triggered output mode for the Octave LOs. In 16cad23 I switched all LOs associated with line B to always_on, which fixes the issue for me. Here is the single-shot report after this change: http://login.qrccluster.com:9000/MrVXLNiyT_6CDsR7EHSqBQ==/, which agrees with the 0.1 results. Note that in 0.1 we are also using always_on.

However, given that triggered still works for line D (octave6), I believe there is something wrong with octave2 in particular that causes the issue for line B. I have reported this to QM and sent the logs. If it gets fixed, I will revert to triggered; otherwise we can keep using always_on, which is also what 0.1 does. This change should not have any serious effect on the results.
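
For reference, a minimal sketch of the Octave section of a QUA configuration where this mode is set (key names follow QM's config schema; the numbers are illustrative, not the actual qw11q values):

# Sketch of the "octaves" entry in a QUA configuration. "triggered" gates the
# LO through the Octave fast switch (the mode hitting the FPGA error above),
# while "always_on" keeps the LO running continuously.
octaves = {
    "octave2": {
        "RF_outputs": {
            1: {
                "LO_frequency": 7.0e9,        # illustrative value
                "LO_source": "internal",
                "gain": 0,
                "output_mode": "always_on",   # workaround; previously "triggered"
            },
        },
        "connectivity": "con2",
    },
}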

@andrea-pasquale (Contributor) left a comment

Thanks @stavros11, I confirm that everything is working fine.
