Fix QNN for input and weights ordering #728

Merged
Merged 7 commits into qiskit-community:main from fix_qnn_params on Jan 8, 2024

Conversation

Member @woodsp-ibm commented Dec 13, 2023

Summary

Fixes #727

Reassigns the QNN circuit parameters so that circuit.parameters, which is returned in alphanumeric order, comes out in the order the primitive needs, matching the data as it is ordered there: inputs followed by weights in the given array.

Details and comments

I added a method that reassigns the parameters in the circuit, producing a new circuit, copied from the one supplied, whose parameters are in the order needed by the data passed to the primitive when it runs. I put this in self._circuit so that internally it is this circuit that is now used when running. As there are parameter and circuit getters, I left these all returning what was passed in originally (as before); the original circuit is assigned to a differently named instance variable, which is what the getter returns.

So what is run is equivalent to, but not identical with, the circuit that was passed in. An alternative would be to alter the data order so it corresponds to the order of the parameters. On initial discussion this seemed less desirable, since it changes the data.
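In rough terms the idea is something like the following (a minimal sketch only; the function name and the "inputs"/"weights" prefixes are hypothetical, not the code in this PR):

```python
from qiskit.circuit import ParameterVector

def reorder_parameters(circuit, input_params, weight_params):
    # Rebind every original parameter to a fresh one whose name sorts
    # alphanumerically in the desired position: "inputs..." before
    # "weights...", matching the (inputs, weights) data layout.
    new_inputs = ParameterVector("inputs", len(input_params))
    new_weights = ParameterVector("weights", len(weight_params))
    mapping = dict(zip(input_params, new_inputs))
    mapping.update(zip(weight_params, new_weights))
    # assign_parameters accepts parameter-to-parameter mappings and
    # returns a new circuit, leaving the supplied one untouched.
    return circuit.assign_parameters(mapping, inplace=False)
```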

This PR still needs a release note, but I figured we should settle that this is how we want to proceed before I write it, not that I imagine the outcome would change its text. Update: I added one anyway...

  • Add release note
  • Address the change in tutorial 11_quantum_convolutional_neural_networks (see comment below on this page)

Fixes #678

In updating the tutorial with a new initial point, I saw the issue described in #678, where "c1" was not a Latin "c" per the overall parameter order (indeed, searching the text for "c" did not locate it), so this addresses that issue too.
As part of the update I also set a random_state on the train_test_split so the result in the notebook is the same each time it is run. Otherwise the split is different each time, which made the results different each time.
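Something along these lines (illustrative only; the data, test_size, and seed value are stand-ins, not the notebook's actual values):

```python
import numpy as np
from sklearn.model_selection import train_test_split

# Stand-in data; the tutorial builds its own images and labels.
images = np.random.rand(50, 8)
labels = np.random.randint(0, 2, size=50)

# Pinning random_state makes the split, and hence the notebook's
# results, the same on every run.
train_images, test_images, train_labels, test_labels = train_test_split(
    images, labels, test_size=0.3, random_state=42
)
```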

This is one of the tutorials where, per issue #725, circuit.draw("mpl") emits deprecation messages, so since this tutorial is being modified anyway I corrected that as well, just for this tutorial, using the "clifford" style, which keeps the same look as before.
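The kind of change involved looks like this, with a stand-in circuit (a sketch, not the tutorial's exact cell):

```python
from qiskit import QuantumCircuit

qc = QuantumCircuit(2)
qc.h(0)
qc.cx(0, 1)

# Passing an explicit style avoids the deprecation message about the
# changing default, and "clifford" keeps the previous look.
qc.draw("mpl", style="clifford")
```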

@coveralls commented Dec 13, 2023

Pull Request Test Coverage Report for Build 7451895217

  • 0 of 0 changed or added relevant lines in 0 files are covered.
  • No unchanged relevant lines lost coverage.
  • Overall coverage decreased (-0.09%) to 92.567%

Totals (Coverage Status)
  • Change from base Build 7451842530: -0.09%
  • Covered Lines: 1868
  • Relevant Lines: 2018

💛 - Coveralls

@woodsp-ibm (Member, Author)

A note:

As the standard feature map is mostly used in tests and tutorials with "x" as the parameter name, and the standard ansatzes with a Greek "θ", the order was naturally as needed, so no change in tests was required and the tutorials look the same as before.
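For example, with the standard library circuits (a quick illustration, not taken from the tests or tutorials):

```python
from qiskit.circuit.library import ZZFeatureMap, RealAmplitudes

feature_map = ZZFeatureMap(2)   # input parameters named x[0], x[1]
ansatz = RealAmplitudes(2)      # weight parameters named θ[0], θ[1], ...
qc = feature_map.compose(ansatz)

# The Latin "x" sorts before the Greek "θ", so circuit.parameters is
# already inputs followed by weights in this common setup.
print(qc.parameters)
```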

The exception is 11_quantum_convolutional_neural_networks, whose behavior is now different and may need to be reviewed. In it, it says:

Since model training may take a long time we have already pre-trained the model for some iterations and saved the pre-trained weights. We’ll continue training from that point by setting initial_point to a vector of pre-trained weights.

The plot no longer looks flat like it did, so most likely the initial point needs reviewing; the accuracy is lower now, most likely because maxiter is limited to 200 and the optimization no longer reaches as good a solution.

[screenshot: plot of the objective function value over training iterations]
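For reference, the mechanism the quoted tutorial text describes is, in outline (a self-contained sketch with stand-in sizes and zeroed weights, not the tutorial's actual model or its saved values):

```python
import numpy as np
from qiskit.circuit.library import ZZFeatureMap, RealAmplitudes
from qiskit_algorithms.optimizers import COBYLA
from qiskit_machine_learning.algorithms.classifiers import NeuralNetworkClassifier
from qiskit_machine_learning.neural_networks import EstimatorQNN

feature_map = ZZFeatureMap(2)
ansatz = RealAmplitudes(2)
qnn = EstimatorQNN(
    circuit=feature_map.compose(ansatz),
    input_params=feature_map.parameters,
    weight_params=ansatz.parameters,
)

# Stand-in for the saved pre-trained weights loaded by the tutorial.
pretrained_weights = np.zeros(qnn.num_weights)

# Training continues from initial_point, capped by the optimizer's maxiter.
classifier = NeuralNetworkClassifier(
    qnn,
    optimizer=COBYLA(maxiter=200),
    initial_point=pretrained_weights,
)
```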

@woodsp-ibm added the "stable backport potential" label (the bug might be minimal and/or important enough to be ported to stable) Dec 14, 2023
Collaborator @adekusar-drl left a comment:

Thanks. Looks good to me.

@mergify[bot] merged commit e9a7540 into qiskit-community:main Jan 8, 2024
17 checks passed
mergify bot pushed a commit that referenced this pull request Jan 8, 2024
* Fix QNN for input and weights ordering

* Black

* Lint

* Update QCNN tutorial

* Add reno

* Fix draw style per #725

---------

Co-authored-by: mergify[bot] <37929162+mergify[bot]@users.noreply.github.com>
(cherry picked from commit e9a7540)
@woodsp-ibm deleted the fix_qnn_params branch January 8, 2024 19:55
woodsp-ibm added a commit to woodsp-ibm/qiskit-machine-learning that referenced this pull request Jan 8, 2024
woodsp-ibm added a commit that referenced this pull request Jan 8, 2024
mergify bot pushed a commit that referenced this pull request Jan 8, 2024
mergify bot pushed a commit that referenced this pull request Jan 8, 2024
mergify bot added a commit that referenced this pull request Jan 8, 2024
(cherry picked from commit 1203346)

Co-authored-by: Steve Wood <[email protected]>
oscar-wallis pushed a commit that referenced this pull request Feb 16, 2024
oscar-wallis pushed a commit that referenced this pull request Feb 16, 2024
Labels: automerge, stable backport potential
Projects: None yet
Participants: 3