Qiskit Machine Learning 0.6.0

Released by @adekusar-drl on 27 Mar 20:48

Changelog

New Features

  • Allow callable as an optimizer in NeuralNetworkClassifier, VQC, NeuralNetworkRegressor, VQR, as well as in QuantumKernelTrainer.

    Now, the optimizer can be either one of Qiskit’s optimizers, such as SPSA, or a callable with the following signature:

      from qiskit.algorithms.optimizers import OptimizerResult
  
      def my_optimizer(fun, x0, jac=None, bounds=None) -> OptimizerResult:
          # Args:
          #     fun (callable): the function to minimize
          #     x0 (np.ndarray): the initial point for the optimization
          #     jac (callable, optional): the gradient of the objective function
          #     bounds (list, optional): a list of tuples specifying the parameter bounds
          result = OptimizerResult()
          result.x = ...   # the optimal parameters found by the routine
          result.fun = ...  # the objective value at the optimum
          return result

      This signature also allows one to pass any SciPy minimizer directly, for instance:

      from functools import partial
      from scipy.optimize import minimize
      optimizer = partial(minimize, method="L-BFGS-B")
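
    Such a callable can then be passed wherever an optimizer is expected. As a rough sketch (the toy dataset, feature map, and ansatz below are illustrative assumptions, not part of this release):

      import numpy as np
      from functools import partial
      from scipy.optimize import minimize

      from qiskit.circuit.library import ZZFeatureMap, RealAmplitudes
      from qiskit_machine_learning.algorithms import VQC

      # pass a SciPy minimizer directly as the optimizer
      vqc = VQC(
          feature_map=ZZFeatureMap(2),
          ansatz=RealAmplitudes(2, reps=1),
          optimizer=partial(minimize, method="COBYLA"),
      )

      # train on a toy dataset of four two-feature samples
      features = np.array([[0.1, 0.2], [0.9, 0.8], [0.2, 0.1], [0.8, 0.9]])
      labels = np.array([0, 1, 0, 1])
      vqc.fit(features, labels)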
  • Added a new FidelityStatevectorKernel class that is optimized to use only statevector-implemented feature maps. Therefore, computational complexity is reduced from $O(N^2)$ to $O(N)$.

    Computed statevector arrays are also cached to further increase efficiency. This cache is cleared when the evaluate method is called, unless auto_clear_cache is False. The cache is unbounded by default, but its size can be limited by the user via the cache_size argument, e.g., set to the number of samples, which is the worst-case requirement.

    By default, the Terra reference Statevector is used; however, the statevector type can be specified via the statevector_type argument.

    Shot-noise emulation can also be added. If shots is None, the exact fidelity is used. Otherwise, the fidelity is estimated as the mean of samples drawn from a binomial distribution with success probability equal to the exact fidelity; see the sketch after the example below.

    With the addition of shot noise, the kernel matrix may no longer be positive semi-definite (PSD). With enforce_psd set to True, this condition is enforced.

    An example of using this class is as follows:

    from sklearn.datasets import make_blobs
    from sklearn.svm import SVC

    from qiskit.circuit.library import ZZFeatureMap
    from qiskit.quantum_info import Statevector

    from qiskit_machine_learning.kernels import FidelityStatevectorKernel

    # generate a simple dataset
    features, labels = make_blobs(
        n_samples=20, centers=2, center_box=(-1, 1), cluster_std=0.1
    )

    feature_map = ZZFeatureMap(feature_dimension=2, reps=2)
    statevector_type = Statevector

    # cache at most len(labels) statevectors, emulate 1000 shots per fidelity,
    # and restore positive semi-definiteness of the kernel matrix
    kernel = FidelityStatevectorKernel(
        feature_map=feature_map,
        statevector_type=statevector_type,
        cache_size=len(labels),
        auto_clear_cache=True,
        shots=1000,
        enforce_psd=True,
    )
    svc = SVC(kernel=kernel.evaluate)
    svc.fit(features, labels)
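
    As a rough sketch of the shot-noise emulation described above (an illustration of the idea with a hypothetical noisy_fidelity helper, not the class’s internal code):

      import numpy as np

      def noisy_fidelity(exact_fidelity: float, shots: int) -> float:
          # draw the number of "successful" shots from a binomial distribution
          # and return the observed success frequency as the fidelity estimate
          successes = np.random.binomial(n=shots, p=exact_fidelity)
          return successes / shots

      print(noisy_fidelity(0.95, shots=1000))  # close to 0.95, with shot noise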
  • The PyTorch connector TorchConnector now fully supports sparse output in both the forward and backward passes. To enable sparse support, the underlying quantum neural network must itself be sparse. In this case, if the sparse property of the connector is not set, the connector inherits sparsity from the network. If the connector is set to be sparse but the network is not, an exception is raised. You may, however, set the connector to be dense even if the network is sparse.

    This snippet illustrates how to create a sparse instance of the connector.

    import torch
    from qiskit import QuantumCircuit
    from qiskit.circuit.library import ZFeatureMap, RealAmplitudes

    from qiskit_machine_learning.connectors import TorchConnector
    from qiskit_machine_learning.neural_networks import SamplerQNN

    # compose a feature map and an ansatz into a single circuit
    num_qubits = 2
    fmap = ZFeatureMap(num_qubits, reps=1)
    ansatz = RealAmplitudes(num_qubits, reps=1)
    qc = QuantumCircuit(num_qubits)
    qc.compose(fmap, inplace=True)
    qc.compose(ansatz, inplace=True)

    # create a sparse QNN; the connector below inherits this setting
    qnn = SamplerQNN(
        circuit=qc,
        input_params=fmap.parameters,
        weight_params=ansatz.parameters,
        sparse=True,
    )

    connector = TorchConnector(qnn)

    # the forward pass returns a sparse tensor
    output = connector(torch.tensor([[1., 2.]]))
    print(output)

    # sparse tensors require sparse operations, e.g. torch.sparse.sum
    loss = torch.sparse.sum(output)
    loss.backward()

    # gradients of the trainable weights are available as usual
    grad = connector.weight.grad
    print(grad)

    In a hybrid setup, where a PyTorch-based neural network has both classical and quantum layers, sparse operations should not be mixed with dense ones; otherwise, PyTorch may raise exceptions. One workaround is an explicit conversion, as sketched below.

    Sparse support requires Python 3.8 or later.
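
    A minimal sketch of such an explicit conversion, continuing from the snippet above (the classical head is a hypothetical addition, not code from this release):

      # densify the sparse QNN output before a dense classical layer
      dense_head = torch.nn.Linear(4, 1)  # 2 qubits -> 4 output probabilities

      sparse_out = connector(torch.tensor([[1., 2.]]))
      dense_out = sparse_out.to_dense()  # explicit sparse -> dense conversion
      prediction = dense_head(dense_out)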

Upgrade Notes

  • The previously deprecated CrossEntropySigmoidLoss loss function has been removed.
  • The previously deprecated datasets have been removed: breast_cancer, digits, gaussian, iris, wine.
  • Positional arguments in QSVC and QSVR were deprecated as of version 0.3. Support for positional arguments has been removed completely in this version; please replace them with the corresponding keyword arguments, as sketched after this list.
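
    A rough sketch of the keyword-argument style for QSVC (the specific options shown are illustrative assumptions):

      from qiskit_machine_learning.algorithms import QSVC
      from qiskit_machine_learning.kernels import FidelityQuantumKernel

      # keyword arguments are now required, both for the quantum kernel and
      # for any underlying scikit-learn SVC options such as C
      qsvc = QSVC(quantum_kernel=FidelityQuantumKernel(), C=10.0)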

Bug Fixes

  • SamplerQNN can now correctly handle quantum circuits that have neither input parameters nor weights. If such a circuit is passed to the QNN, it is executed once in the forward pass, while backward returns None for both gradients; see the sketch below.
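
    A minimal sketch of this behavior (the Bell-state circuit is an illustrative assumption, not code from the release):

      from qiskit import QuantumCircuit
      from qiskit_machine_learning.neural_networks import SamplerQNN

      # a circuit with neither input parameters nor trainable weights
      qc = QuantumCircuit(2)
      qc.h(0)
      qc.cx(0, 1)
      qc.measure_all()

      qnn = SamplerQNN(circuit=qc)

      # the circuit is executed once; the sampled quasi-probabilities are returned
      print(qnn.forward(None, None))

      # no parameters, hence no gradients
      print(qnn.backward(None, None))  # (None, None)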