Hello. I’m @TM_MEME
I looked into the PennyLane 0.31 release and tried to use the native support for parameter broadcasting with `TorchLayer`.
According to the documentation, a QNode can be executed on a whole batch of inputs of shape `(batch_size, n_qubits)` at once, which may speed up my quantum convolution layer class. Below is the abbreviated code.
```python
import numpy as np
import pennylane as qml
import torch


# defining the circuit
def circuit(inputs, weights):
    # define the number of qubits from the length of the inputs
    N_inputs = len(inputs)
    # apply a Hadamard gate to all wires
    for i in range(N_inputs):
        qml.Hadamard(wires=i)
    qml.Barrier(wires=range(N_inputs))
    # encode the input
    for i in range(N_inputs):
        qml.RY(np.pi * inputs[i], wires=i)
    qml.Barrier(wires=range(N_inputs))
    # build the parity observable Z_0 @ Z_1 @ ... @ Z_{N-1}
    z_observables = qml.PauliZ(0)
    for i in range(1, N_inputs):
        z_observables @= qml.PauliZ(i)
    # return the expectation value
    result = qml.expval(z_observables)
    return result
```
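For context, the circuit evaluates fine on a single length-4 sample. Here is a minimal sanity check of my own (assuming `default.qubit` for simplicity, and a dummy weights tensor, since the trainable part is omitted in this abbreviated version):

```python
dev_check = qml.device("default.qubit", wires=4)
qnode_check = qml.QNode(circuit, dev_check, interface="torch")

sample = torch.rand(4)            # one window of kernel_size = 4 values
dummy_weights = torch.rand(2, 4)  # matches weight_shapes; unused in the abbreviated circuit
print(qnode_check(sample, dummy_weights))  # a single scalar expectation value
```

The quantum convolution layer that wraps this circuit is defined next.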
```python
class QConv1D_Model2(torch.nn.Module):
    def __init__(
        self,
        out_channels,
        kernel_size,
        weight_shapes,
        diff_method,
        grad_on_execution,
        mode,
        dev_name,
        dilation=1,
        stride=1,
    ):
        super(QConv1D_Model2, self).__init__()
        self.out_channels = out_channels
        self.kernel_size = kernel_size
        self.dilation = dilation
        self.stride = stride
        self.diff_method = diff_method
        self.grad_on_execution = grad_on_execution
        self.mode = mode
        self.dev_name = dev_name
        # one wire per kernel element
        self.dev = qml.device(self.dev_name, wires=self.kernel_size, batch_obs=False)
        self.weight_shapes = weight_shapes
        self.circuit = qml.QNode(
            circuit,
            self.dev,
            interface="torch",
            diff_method=self.diff_method,
            grad_on_execution=self.grad_on_execution,
            mode=self.mode,
        )
        self.singlelayer = qml.qnn.TorchLayer(self.circuit, self.weight_shapes)

    def forward(self, x):
        device = x.device
        # derive the shapes from the input instead of relying on globals
        batch_size, in_channels, input_width = x.shape
        output_width = (input_width - (self.kernel_size - 1) * self.dilation - 1) // self.stride + 1
        output = torch.zeros(batch_size, self.out_channels, output_width, device=device)
        print(x.shape, x, "x")
        # extract sliding windows of length kernel_size along the width dimension
        unfolded_input = x.unfold(2, self.kernel_size * self.dilation, self.stride)
        print(unfolded_input.shape, unfolded_input, " unfolded_input")
        for b in range(batch_size):
            for o in range(self.out_channels):
                for c in range(in_channels):
                    # shape (output_width, kernel_size) = (5, 4): one batched circuit call
                    inputs = unfolded_input[b, c, :, :]
                    print(inputs.shape, inputs, "inputs")
                    qconv_output = self.singlelayer(inputs)
                    qconv_output = qconv_output.reshape(output_width)
                    output[b, o, :] += qconv_output
        return output
```
```python
# Initialize the quantum convolutional layer
out_channels = 1
kernel_size = 4
weight_shapes = {"weights": (2, kernel_size)}
diff_method = "adjoint"
grad_on_execution = True
mode = "backward"
dev_name = "lightning.qubit"
stride = 1
dilation = 1
model = QConv1D_Model2(
    out_channels, kernel_size, weight_shapes, diff_method,
    grad_on_execution, mode, dev_name, dilation, stride,
)

# Create a dummy input tensor
batch_size = 1
in_channels = 4
input_width = 8
x = torch.rand((batch_size, in_channels, input_width))

# Test the forward method
output = model.forward(x)
```
The forward method proceeds as follows (see the small unfold sketch right after this list):
- The input tensor of shape `[1, 4, 8]` is unfolded to `[1, 4, 5, 4]`.
- Each `[5, 4]` slice of the `[1, 4, 5, 4]` tensor is processed by the quantum circuit as one batch, i.e. `(batch_size, n_qubits) = (5, 4)` following the documented convention. The number of qubits (`kernel_size`) is of course set to 4.
- This `[5, 4]` batched execution is repeated inside the loop.
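For clarity on the unfolding step, here is a small standalone sketch of `torch.Tensor.unfold` semantics as I understand them (a toy tensor, not part of the model code):

```python
import torch

t = torch.arange(8.0).reshape(1, 1, 8)  # (batch, channels, width)
windows = t.unfold(2, 4, 1)             # size-4 windows with stride 1 along dim 2
print(windows.shape)                    # torch.Size([1, 1, 5, 4])
print(windows[0, 0])
# tensor([[0., 1., 2., 3.],
#         [1., 2., 3., 4.],
#         [2., 3., 4., 5.],
#         [3., 4., 5., 6.],
#         [4., 5., 6., 7.]])
```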
The logic is not fully complete yet, but at `qconv_output = self.singlelayer(inputs)` I get the following error:
```
Traceback (most recent call last):
  File "/home/username/.local/lib/python3.8/site-packages/pennylane/wires.py", line 239, in index
    return self._labels.index(wire)
ValueError: tuple.index(x): x not in tuple

The above exception was the direct cause of the following exception:

Traceback (most recent call last):
  File "/usr/lib/python3.8/runpy.py", line 194, in _run_module_as_main
    return _run_code(code, main_globals, None,
  File "/usr/lib/python3.8/runpy.py", line 87, in _run_code
    exec(code, run_globals)
  File "/large/home/username/project/FCNA/QConv1d_model2_broadcasting.py", line 177, in <module>
    output = model.forward(x)
  File "/large/home/username/project/FCNA/QConv1d_model2_broadcasting.py", line 148, in forward
    qconv_output = self.singlelayer(inputs)
  File "/home/username/.local/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1190, in _call_impl
    return forward_call(*input, **kwargs)
  File "/home/username/.local/lib/python3.8/site-packages/pennylane/qnn/torch.py", line 408, in forward
    results = self._evaluate_qnode(inputs)
  File "/home/username/.local/lib/python3.8/site-packages/pennylane/qnn/torch.py", line 429, in _evaluate_qnode
    res = self.qnode(**kwargs)
  File "/home/username/.local/lib/python3.8/site-packages/pennylane/qnode.py", line 950, in __call__
    res = qml.execute(
  File "/home/username/.local/lib/python3.8/site-packages/pennylane/interfaces/execution.py", line 642, in execute
    res = _execute(
  File "/home/username/.local/lib/python3.8/site-packages/pennylane/interfaces/torch.py", line 498, in execute
    return ExecuteTapes.apply(kwargs, *parameters)
  File "/home/username/.local/lib/python3.8/site-packages/pennylane/interfaces/torch.py", line 262, in new_apply
    flat_out = orig_apply(out_struct_holder, *inp)
  File "/home/username/.local/lib/python3.8/site-packages/pennylane/interfaces/torch.py", line 266, in new_forward
    out = orig_fw(ctx, *inp)
  File "/home/username/.local/lib/python3.8/site-packages/pennylane/interfaces/torch.py", line 343, in forward
    res, ctx.jacs = ctx.execute_fn(unwrapped_tapes, **ctx.gradient_kwargs)
  File "/usr/lib/python3.8/contextlib.py", line 75, in inner
    return func(*args, **kwds)
  File "/home/username/.local/lib/python3.8/site-packages/pennylane/_device.py", line 510, in execute_and_gradients
    res.append(self.batch_execute([circuit])[0])
  File "/home/username/.local/lib/python3.8/site-packages/pennylane/_qubit_device.py", line 603, in batch_execute
    res = self.execute(circuit)
  File "/home/username/.local/lib/python3.8/site-packages/pennylane/_qubit_device.py", line 320, in execute
    self.apply(circuit.operations, rotations=self._get_diagonalizing_gates(circuit), **kwargs)
  File "/home/username/.local/lib/python3.8/site-packages/pennylane_lightning/lightning_qubit.py", line 435, in apply
    self._pre_rotated_state = self.apply_lightning(self._state, operations)
  File "/home/username/.local/lib/python3.8/site-packages/pennylane_lightning/lightning_qubit.py", line 401, in apply_lightning
    wires = self.wires.indices(o.wires)
  File "/home/username/.local/lib/python3.8/site-packages/pennylane/wires.py", line 265, in indices
    return [self.index(w) for w in wires]
  File "/home/username/.local/lib/python3.8/site-packages/pennylane/wires.py", line 265, in <listcomp>
    return [self.index(w) for w in wires]
  File "/home/username/.local/lib/python3.8/site-packages/pennylane/wires.py", line 241, in index
    raise WireError(f"Wire with label {wire} not found in {self}.") from e
pennylane.wires.WireError: Wire with label 4 not found in <Wires = [0, 1, 2, 3]>.
```
The print output just before `qconv_output = self.singlelayer(inputs)` is as follows:
```
torch.Size([1, 4, 8]) tensor([[[0.7579, 0.8861, 0.9163, 0.4564, 0.1829, 0.2591, 0.8855, 0.1303],
         [0.6727, 0.4148, 0.2656, 0.3196, 0.4342, 0.9709, 0.5975, 0.5479],
         [0.4503, 0.4824, 0.8944, 0.8432, 0.5346, 0.0663, 0.3357, 0.6092],
         [0.4227, 0.1492, 0.8288, 0.2094, 0.9195, 0.2720, 0.6479, 0.6898]]]) x
torch.Size([1, 4, 5, 4]) tensor([[[[0.7579, 0.8861, 0.9163, 0.4564],
          [0.8861, 0.9163, 0.4564, 0.1829],
          [0.9163, 0.4564, 0.1829, 0.2591],
          [0.4564, 0.1829, 0.2591, 0.8855],
          [0.1829, 0.2591, 0.8855, 0.1303]],

         [[0.6727, 0.4148, 0.2656, 0.3196],
          [0.4148, 0.2656, 0.3196, 0.4342],
          [0.2656, 0.3196, 0.4342, 0.9709],
          [0.3196, 0.4342, 0.9709, 0.5975],
          [0.4342, 0.9709, 0.5975, 0.5479]],

         [[0.4503, 0.4824, 0.8944, 0.8432],
          [0.4824, 0.8944, 0.8432, 0.5346],
          [0.8944, 0.8432, 0.5346, 0.0663],
          [0.8432, 0.5346, 0.0663, 0.3357],
          [0.5346, 0.0663, 0.3357, 0.6092]],

         [[0.4227, 0.1492, 0.8288, 0.2094],
          [0.1492, 0.8288, 0.2094, 0.9195],
          [0.8288, 0.2094, 0.9195, 0.2720],
          [0.2094, 0.9195, 0.2720, 0.6479],
          [0.9195, 0.2720, 0.6479, 0.6898]]]]) unfolded_input
torch.Size([5, 4]) tensor([[0.7579, 0.8861, 0.9163, 0.4564],
        [0.8861, 0.9163, 0.4564, 0.1829],
        [0.9163, 0.4564, 0.1829, 0.2591],
        [0.4564, 0.1829, 0.2591, 0.8855],
        [0.1829, 0.2591, 0.8855, 0.1303]]) inputs
```
It looks like the quantum circuit is treating the vertical dimension of length 5 as its input, even though I want each horizontal row of length 4 from the `[5, 4]` tensor to be the circuit input.
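My suspicion, shown as a small standalone check: `len()` on the batched `[5, 4]` tensor returns the batch dimension, so `N_inputs` becomes 5 inside the circuit and the wire loop reaches wire 4, which matches the `WireError` above (this is my hypothesis, not a confirmed diagnosis):

```python
import torch

inputs = torch.rand(5, 4)        # (batch_size, n_qubits) as passed to the circuit
print(len(inputs))               # 5 -> N_inputs = 5 inside the circuit
print(list(range(len(inputs))))  # [0, 1, 2, 3, 4] -> wire 4 does not exist on a 4-wire device
print(inputs.shape[-1])          # 4 -> the per-sample length I actually want
```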
As another reference point, the following simple sample code runs without error. I also compared its execution time against sequential execution with a for loop and confirmed that the batched version is indeed faster.
```python
import time

import pennylane as qml
import torch

n_qubits = 10
dev = qml.device("lightning.qubit", wires=n_qubits)

@qml.qnode(dev)
def qnode(inputs, weights):
    qml.AngleEmbedding(inputs, wires=range(n_qubits))
    qml.BasicEntanglerLayers(weights, wires=range(n_qubits))
    return [qml.expval(qml.PauliZ(wires=i)) for i in range(n_qubits)]

n_layers = 6
weight_shapes = {"weights": (n_layers, n_qubits)}
qlayer = qml.qnn.TorchLayer(qnode, weight_shapes)

batch_size = 400
inputs = torch.rand((batch_size, n_qubits))
print("Batch inputs:", inputs, inputs.shape)

start = time.time()
outputs = qlayer(inputs)
end = time.time()
print("Batch execution time:", end - start)
print("Batch outputs:", outputs, outputs.shape)
```
So I have three questions.
- What is the essential problem in this issue? The difference between the simple sample code and the quantum convolution code is that the `TorchLayer` is defined and used inside a parent model via class inheritance. If there is any basic problem with the code, please let me know.
- It seems that detailed usage instructions for the native parameter-broadcasting support have not been published yet. I would appreciate more detailed documentation in the future.
- What were your expectations and objectives in implementing this broadcasting feature? I understand that it allows matrix-to-matrix execution (a whole batch of inputs mapped to a batch of outputs in one call) instead of vector-by-vector execution. Is this understanding correct? I am assuming that this feature lets us execute a batch of quantum circuits at once, whereas previously we could only execute them sequentially in a for loop (see the shape sketch below). Similar questions have been asked before.
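To make this last point concrete, this is the shape behaviour I am assuming, reusing `qlayer` from the sample above (my illustration, not output I have verified here):

```python
single = torch.rand(n_qubits)     # vector in -> one circuit execution
print(qlayer(single).shape)       # torch.Size([10])

batch = torch.rand(32, n_qubits)  # matrix in -> one broadcast execution over 32 samples
print(qlayer(batch).shape)        # torch.Size([32, 10])
```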
Thanks.