Dimension Mismatch in torch_connector.py #716
Comments
Thanks for pointing this out @miles0428. Perhaps something along these lines would make the subscript limit explicit:

```python
text2 = ''
char_limit = 26
for i in range(27):
    if i >= char_limit:
        # guard before appending: only 26 lowercase letters are available
        raise RuntimeError('Cannot handle more than 26 einsum subscripts.')
    text2 += chr(97 + i)  # chr(97) == 'a'
```
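Checking the limit before appending means the loop stops once the 26 lowercase letters available as einsum subscripts are exhausted, rather than silently emitting characters beyond `'z'` (`chr(97 + 26)` is `'{'`).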
@miles0428 could you please provide the exact code you used to instantiate the model?
Hi @edoaltamura, here is the code with which you can reproduce the error. I have filled in the previously omitted parts of `Quanv2d.py` so that the error is reproducible.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F
from typing import Union, List

import qiskit as qk
from qiskit import QuantumCircuit
from qiskit_machine_learning.neural_networks import SamplerQNN
from qiskit_machine_learning.connectors import TorchConnector
# from torch_connector import TorchConnector


class Quanv2d(nn.Module):
    '''
    A quantum convolutional layer

    args
        input_channel: number of input channels
        output_channel: number of output channels
        num_qubits: number of qubits
        num_weight: number of weights
        kernel_size: size of the kernel
        stride: stride of the kernel
    '''
    def __init__(self,
                 input_channel: int,
                 output_channel: int,
                 num_qubits: int,
                 num_weight: int,
                 kernel_size: int = 3,
                 stride: int = 1
                 ):
        super().__init__()
        self.kernel_size = kernel_size
        self.stride = stride
        self.input_channel = input_channel
        self.output_channel = output_channel
        self.num_weight = num_weight
        self.num_input = kernel_size * kernel_size * input_channel
        self.num_qubits = num_qubits
        self.qnn = TorchConnector(self.Sampler())
        assert 2**num_qubits >= output_channel, \
            '2**num_qubits must be greater than or equal to output_channel'

    def build_circuit(self,
                      num_weights: int,
                      num_input: int,
                      num_qubits: int = 3
                      ) -> tuple[QuantumCircuit, List[qk.circuit.Parameter], List[qk.circuit.Parameter]]:
        '''
        build the quantum circuit

        param
            num_weights: number of weights
            num_input: number of inputs
            num_qubits: number of qubits
        return
            qc: quantum circuit
            weight_params: weight parameters
            input_params: input parameters
        '''
        qc = QuantumCircuit(num_qubits)
        weight_params = [qk.circuit.Parameter('w{}'.format(i)) for i in range(num_weights)]
        input_params = [qk.circuit.Parameter('x{}'.format(i)) for i in range(num_input)]
        # construct the quantum circuit with the parameters
        for i in range(num_qubits):
            qc.h(i)
        # encode the inputs as RY rotations, cycling through the qubits
        for i in range(num_input):
            qc.ry(input_params[i] * 2 * torch.pi, i % num_qubits)
        for i in range(num_qubits - 1):
            qc.cx(i, i + 1)
        # apply the trainable weights as RX rotations
        for i in range(num_weights):
            qc.rx(weight_params[i] * 2 * torch.pi, i % num_qubits)
        for i in range(num_qubits - 1):
            qc.cx(i, i + 1)
        return qc, weight_params, input_params

    def Sampler(self) -> SamplerQNN:
        '''
        wrap the quantum circuit in a SamplerQNN

        return
            qnn: the SamplerQNN built from the circuit
        '''
        qc, weight_params, input_params = self.build_circuit(self.num_weight, self.num_input, self.num_qubits)
        # use SamplerQNN to convert the quantum circuit to a PyTorch module
        qnn = SamplerQNN(
            circuit=qc,
            weight_params=weight_params,
            interpret=self.interpret,
            input_params=input_params,
            output_shape=self.output_channel,
        )
        return qnn

    def interpret(self, X: Union[List[int], int]) -> Union[int, List[int]]:
        '''
        interpret the output of the quantum circuit using the modulo function
        this function is used in SamplerQNN

        args
            X: output of the quantum circuit
        return
            the remainder of the output divided by the number of output channels
        '''
        return X % self.output_channel

    def forward(self, X: torch.Tensor) -> torch.Tensor:
        '''
        forward function for the quantum convolutional layer

        args
            X: input tensor with shape (batch_size, input_channel, height, width)
        return
            X: output tensor with shape (batch_size, output_channel, height, width)
        '''
        height = len(range(0, X.shape[2] - self.kernel_size + 1, self.stride))
        width = len(range(0, X.shape[3] - self.kernel_size + 1, self.stride))
        output = torch.zeros((X.shape[0], self.output_channel, height, width))
        # extract sliding patches: (batch, C * k * k, L) with L = height * width
        X = F.unfold(X, kernel_size=self.kernel_size, stride=self.stride)
        # feed the patches through the QNN with an extra leading patch dimension
        qnn_output = self.qnn(X.permute(2, 0, 1)).permute(1, 2, 0)
        qnn_output = torch.reshape(qnn_output, shape=(X.shape[0], self.output_channel, height, width))
        output += qnn_output
        return output


if __name__ == '__main__':
    # Define the model: 3 input channels, 1 output channel, 3 qubits, 3 weights
    model = Quanv2d(3, 1, 3, 3, stride=1)
    X = torch.rand((2, 3, 6, 6))
    X.requires_grad = True
    X1 = model(X)
    X1 = torch.sum(X1)
    # the backward pass triggers the einsum dimension mismatch in torch_connector.py
    X1.backward()
```
Thanks @miles0428, the last example was very useful. I've allowed the einsum signature to generalize to higher-dimensional operands; see the merged changes below.
* Update README.md
* Generalize the Einstein summation signature
* Add reno
* Update Copyright
* Rename and add test
* Update Copyright
* Add docstring for `test_get_einsum_signature`
* Correct spelling
* Disable spellcheck for comments
* Add `docstring` in pylint dict
* Delete example in docstring
* Add Einstein in pylint dict
* Add full use case in einsum dict
* Spelling and type ignore
* Spelling and type ignore
* Spelling and type ignore
* Spelling and type ignore
* Spelling and type ignore
* Remove for loop in einsum function and remove Literal arguments (1/2)
* Remove for loop in einsum function and remove Literal arguments (1/2)
* Remove for loop in einsum function and remove Literal arguments (2/2)
* Update RuntimeError msg
* Update RuntimeError msg - line too long
* Trigger CI

Co-authored-by: FrancescaSchiav <[email protected]>
Co-authored-by: M. Emre Sahin <[email protected]>
Environment
What is happening?
I am attempting to build a quantum version of a convolutional layer with Qiskit and PyTorch, but I encountered an error related to the Einstein summation method. I believe this issue arises because the dimensions do not match when I execute `loss.backward()`. The problem specifically stems from the fact that `torch_connector.py` uses `weights_grad = torch.einsum("ij,ijk->k", grad_output.detach().cpu(), weights_grad)` for the 3D case. However, in my implementation, both `grad_output.detach().cpu()` and `weights_grad` are 4D. To resolve this, I modified the signature from `"ij,ijk->k"` to `"ijl,ijlk->k"`, and this corrected the problem.
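To make the mismatch concrete, here is a minimal, self-contained sketch (the tensor sizes are made up; only the number of dimensions matters):

```python
import torch

# 4D case produced by a layer like the one above: an extra leading patch dimension
grad_output = torch.rand(16, 2, 4)        # (patches, batch, outputs)
weights_grad = torch.rand(16, 2, 4, 3)    # (patches, batch, outputs, weights)

# The 3D signature fails: two subscripts cannot index a 3D operand
try:
    torch.einsum("ij,ijk->k", grad_output, weights_grad)
except RuntimeError as err:
    print(err)

# The generalized signature also sums over the extra leading dimension
w = torch.einsum("ijl,ijlk->k", grad_output, weights_grad)
print(w.shape)  # torch.Size([3]) -- one gradient entry per weight
```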
How can we reproduce the issue?
* Quantum Convolution Layer
* Torch Model
* Error code
What should happen?
The signature `"ij,ijk->k"` should match the dimensions of the two operands (`grad_output.detach().cpu()` and `weights_grad`).
Any suggestions?
Maybe add a check that determines the dimensionality of the operands and builds the einsum signature accordingly, for example as in the sketch below.
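A minimal sketch of that idea, assuming a hypothetical helper (the name echoes the `test_get_einsum_signature` commit above, but the shipped implementation may differ):

```python
import torch

def get_einsum_signature(n_dims: int) -> str:
    """Build a signature such as 'ij,ijk->k' generalized to n_dims
    dimensions of grad_output (illustrative helper, not the library API)."""
    char_limit = 26  # einsum subscripts here are limited to the letters a-z
    if n_dims >= char_limit:
        raise RuntimeError(f'Cannot handle more than {char_limit} dimensions.')
    trace = ''.join(chr(97 + i) for i in range(n_dims))   # e.g. 'abc' for 3D
    out = chr(97 + n_dims)                                # extra weight index
    return f'{trace},{trace}{out}->{out}'

grad_output = torch.rand(16, 2, 4)       # n dims
weights_grad = torch.rand(16, 2, 4, 3)   # n + 1 dims
signature = get_einsum_signature(grad_output.ndim)       # 'abc,abcd->d'
w = torch.einsum(signature, grad_output, weights_grad)
print(signature, w.shape)                # abc,abcd->d torch.Size([3])
```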