Hello! I’m trying to develop a quantum neural network using PyTorch and PennyLane. I am experimenting with state-preparation algorithms, and currently I am testing this encoding routine:
```python
import pennylane as qml
from numpy import binary_repr, pi
from torch import Tensor

# These are methods of my wrapper class; imports shown here for readability.

def Ry_EncodingFormula(self, K: float, samples: Tensor):
    # Map pixel values g_k in [0, K] to rotation angles in [0, pi/2]
    scalar = (pi / 2) / K
    return scalar * samples

def statePreparation(self, inputs: Tensor):
    # Uniform superposition over all pixel positions
    for i in self.positionalWires:
        qml.Hadamard(wires=i)
    # With nBitEncoding = 8 this gives K = 255
    encodingThetas = self.Ry_EncodingFormula(K=float(2**self.nBitEncoding) - 1, samples=inputs)
    for i in range(encodingThetas.shape[1]):
        # Bit string of the current pixel position, as a list of characters
        currentState = list(binary_repr(i, width=len(self.positionalWires)))
        # Positional qubits to flip so the controls fire on |i>
        # (the digits are characters, so compare against "0", not 0)
        XControlQubits = [index for index, digit in enumerate(currentState) if digit == "0"]
        for x in XControlQubits:
            qml.PauliX(wires=self.positionalWires[x])
        for controller in range(len(self.positionalWires)):
            if controller == len(self.positionalWires) - 1:
                qml.CRY(encodingThetas[:, i], wires=[self.positionalWires[controller], self.colorEncodingWire])
            else:
                qml.CRY(encodingThetas[:, i], wires=[self.positionalWires[controller], self.colorEncodingWire])
                qml.CNOT(wires=[self.positionalWires[controller], self.positionalWires[controller + 1]])
                qml.CRY(-encodingThetas[:, i], wires=[self.positionalWires[controller + 1], self.colorEncodingWire])
                qml.CNOT(wires=[self.positionalWires[controller], self.positionalWires[controller + 1]])
        # Undo the X flips before encoding the next pixel
        for x in XControlQubits:
            qml.PauliX(wires=self.positionalWires[x])
```
The functions are methods of a “wrapper” class.
Basically, I want to encode the color and positional information of each pixel of a grayscale image, using an appropriate number of positional wires and one color wire. Each pixel is encoded as a rotation angle via the formula $\theta_k = \frac{\pi}{2K} \times g_k$, where $g_k$ is the $k$-th pixel of the image and $K$ is the maximum pixel value. Once the rotations are computed, a series of gates is applied using those angles, following the logic in the code.
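For context, my understanding is that the target state is the usual FRQI-style encoding, with $m$ positional wires indexing the pixels and one color wire:

$$
|I\rangle = \frac{1}{\sqrt{2^{m}}} \sum_{k=0}^{2^{m}-1} \big( \cos\theta_k\,|0\rangle + \sin\theta_k\,|1\rangle \big) \otimes |k\rangle,
\qquad \theta_k = \frac{\pi}{2K}\, g_k .
$$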
`self.positionalWires` is a range of wire indexes (e.g. `[0, 1, 2, 3]`).
Images are 8-bit encoded, so the maximum value $K$ of the pixels is 255.
The inputs are, of course, the images. Specifically, `inputs` is a tensor of flattened images with shape `[batch_size, width*height]`. For reference, I am currently using the MedMNIST dataset, which has both grayscale and RGB images of shape 28x28, so my inputs have shape `[batch_size, 784]`.
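To make the shapes concrete, here is a quick sketch of what the inputs and the resulting angles look like (dummy data, `nBitEncoding = 8` assumed):

```python
import torch
from numpy import pi

batch_size = 4
# Dummy batch of flattened 28x28 grayscale images with 8-bit pixel values
inputs = torch.randint(0, 256, (batch_size, 28 * 28)).float()  # [4, 784]

K = float(2**8) - 1                # 255, the maximum pixel value
thetas = ((pi / 2) / K) * inputs   # [4, 784], angles in [0, pi/2]

print(inputs.shape, thetas.shape, float(thetas.max()))
```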
Consider the function `statePreparation` as a step in a `circuit` function:
```python
def circuit(self, inputs, weights):
    self.statePreparation(inputs)
    self.anotherManipulation(weights)
    ...
    return [qml.expval(qml.PauliZ(self.colorEncodingWire))]
```
The circuit is then used in a QNode:

```python
self.dev = qml.device(name=backend, wires=self.circuitWires)
self.qnn = qml.qnn.TorchLayer(
    qml.QNode(self.circuit, self.dev, interface="torch", diff_method=diffMethod),
    weight_shapes=weightShapes,
)
```
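For completeness, this wiring follows the standard `TorchLayer` recipe; a standalone toy version (hypothetical two-qubit circuit, not my actual ansatz) looks like this:

```python
import pennylane as qml
import torch

n_qubits = 2
dev = qml.device("default.qubit", wires=n_qubits)

@qml.qnode(dev, interface="torch", diff_method="backprop")
def toy_circuit(inputs, weights):
    # TorchLayer requires the data argument to be named `inputs`
    qml.AngleEmbedding(inputs, wires=range(n_qubits))
    qml.BasicEntanglerLayers(weights, wires=range(n_qubits))
    return [qml.expval(qml.PauliZ(w)) for w in range(n_qubits)]

layer = qml.qnn.TorchLayer(toy_circuit, weight_shapes={"weights": (3, n_qubits)})
batch = torch.rand(5, n_qubits)  # [batch_size, n_features]
print(layer(batch).shape)        # the leading dimension stays the batch size
```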
I omit the rest of the PyTorch code as I do not have problems with it.
The questions are the following:
- Is it correct to broadcast values for batched inputs using `encodingThetas[:, i]`? I have seen this notation in a past discussion (I’ll try to recover the link to it) and in the release notes, but I want to be sure I am not actually broadcasting all the encoding parameters to the same circuit, and that I am retaining the “batched” behaviour of classical neural networks; see the sketch after this list for the kind of check I have in mind.
- Is there a way to avoid the for loop over the tensor’s features? Maybe by defining a small ansatz for the routine applied to each pixel and then broadcasting this ansatz over the pixels? Execution times are pretty high and I would like to optimize wherever I can. I have already tried different backends and `diff_method`s based on insights from other discussions, but so far my best-performing setup is `"default.qubit"` as `backend` and `"backprop"` as `diff_method`.
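To illustrate what I mean by “retaining the batched behaviour”, this is the kind of standalone check I have in mind (a toy circuit, not my actual one):

```python
import pennylane as qml
import torch
from numpy import pi

dev = qml.device("default.qubit", wires=1)

@qml.qnode(dev, interface="torch", diff_method="backprop")
def tiny_circuit(thetas):
    # thetas has shape [batch_size]; parameter broadcasting should simulate
    # one circuit per angle, not a single circuit receiving all angles
    qml.RY(thetas, wires=0)
    return qml.expval(qml.PauliZ(0))

thetas = torch.tensor([0.0, pi / 2, pi])
print(tiny_circuit(thetas))  # expected: roughly [1., 0., -1.], one value per sample
```

If the output has one expectation value per batch element, I would take that as confirmation that `encodingThetas[:, i]` behaves the same way inside `statePreparation`.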
I hope I have been clear enough in explaining my issue.
Thank you in advance for your help! Hope to hear from you soon.
PennyLane version info (output of `qml.about()`):

```
Name: PennyLane
Version: 0.38.0
Summary: PennyLane is a cross-platform Python library for quantum computing, quantum machine learning, and quantum chemistry. Train a quantum computer the same way as a neural network.
Home-page: https://github.com/PennyLaneAI/pennylane
Author:
Author-email:
License: Apache License 2.0
Location: /home/lollo/miniconda3/envs/pennylane/lib/python3.9/site-packages
Requires: appdirs, autograd, autoray, cachetools, networkx, numpy, packaging, pennylane-lightning, requests, rustworkx, scipy, toml, typing-extensions
Required-by: PennyLane_Lightning, PennyLane_Lightning_GPU
Platform info: Linux-6.9.3-76060903-generic-x86_64-with-glibc2.35
Python version: 3.9.20
Numpy version: 1.23.5
Scipy version: 1.13.1
Installed devices:
- default.clifford (PennyLane-0.38.0)
- default.gaussian (PennyLane-0.38.0)
- default.mixed (PennyLane-0.38.0)
- default.qubit (PennyLane-0.38.0)
- default.qubit.autograd (PennyLane-0.38.0)
- default.qubit.jax (PennyLane-0.38.0)
- default.qubit.legacy (PennyLane-0.38.0)
- default.qubit.tf (PennyLane-0.38.0)
- default.qubit.torch (PennyLane-0.38.0)
- default.qutrit (PennyLane-0.38.0)
- default.qutrit.mixed (PennyLane-0.38.0)
- default.tensor (PennyLane-0.38.0)
- null.qubit (PennyLane-0.38.0)
- lightning.qubit (PennyLane-Lightning-0.38.0)
- lightning.gpu (PennyLane-Lightning-GPU-0.38.0)
```