Hey @SuFong_Chien,
Good questions!
How do you judge that the cutoff = 4 is good enough? It refers to the number of photons in the Fock state, right? If I am not mistaken, the function fitting example sets it to 10.
Yes, the cutoff refers to the number of photons that are kept track of in each mode (strictly, a cutoff of 4 means that we track the 0, 1, 2, and 3 photon states). Ideally, the higher the cutoff the better, allowing us to have greater levels of squeezing or displacement, which push up the average number of photons. Unfortunately, this comes with a big trade-off in simulation speed: the overhead is (cutoff) ^ (modes), so increasing the number of modes is exponentially hard. We can get away with a cutoff of 10 because we have few modes, but going up to 5 modes we may need to compromise. One way to check is to calculate the trace of the output state; if it is significantly below 1, then we know the cutoff is perhaps too low.
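As a rough sketch of that trace check (using Strawberry Fields directly rather than through PennyLane, with a single mode and an arbitrary squeezing value of 1.0 just for illustration):

import strawberryfields as sf
from strawberryfields import ops

prog = sf.Program(1)
with prog.context as q:
    ops.Sgate(1.0) | q[0]  # squeezing pushes population into higher Fock states

eng = sf.Engine("fock", backend_options={"cutoff_dim": 4})
state = eng.run(prog).state

# If the trace is significantly below 1, probability has "leaked" past the
# cutoff and the results at this cutoff_dim shouldn't be trusted.
print(state.trace())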
What is the purpose of the ‘1’ in [4, 5, 3, 2, 1]? In this case we need 5 wires to construct the network, right?
Ah, the 1 was just to bring us down to a single mode and have a 1D output for the sake of this prototype, similar to having, e.g., a final neuron in a neural network acting as a binary classifier. The code above should allow you to be free in your choice of widths.
I am sorry, I am a bit lost now. If I want to train this QNN, I must pass the training data as input (making loops for shuffling the data, etc.). After that I use the weights for prediction, am I right? I suppose there is a *.predict() function available in PennyLane, isn’t there?
Training the QNN would look something like:
import pennylane as qml
from pennylane import numpy as np

# Architecture: number of modes acted on by each successive layer
widths = [4, 5, 3, 2]
wires = max(widths)
cutoff = 4
seed = 1967

dev = qml.device("strawberryfields.fock", wires=wires, cutoff_dim=cutoff)

@qml.qnode(dev)
def qnn(inputs, weights):
    # Encode the classical inputs as displacements, one per mode
    qml.templates.DisplacementEmbedding(inputs, wires=range(wires))
    # Apply one CV neural network layer per entry in `widths`
    for weight, width in zip(weights, widths):
        qml.templates.CVNeuralNetLayers(*weight, wires=range(width))
    # The final layer has width 2, so we measure two quadrature expectations
    return qml.expval(qml.X(0)), qml.expval(qml.X(1))

# Randomly initialize the weights of each layer
weights = [qml.init.cvqnn_layers_all(n_layers=1, n_wires=width, seed=seed) for width in widths]

# Toy input/target data
inputs = np.random.random(wires)
outputs = np.random.random(widths[-1])

opt = qml.GradientDescentOptimizer(stepsize=0.4)

def cost(weights):
    # Squared error between the circuit output and the target
    return np.sum((qnn(inputs, weights) - outputs) ** 2)

print("Example weight before: ", weights[-1][0])

for i in range(2):
    weights = opt.step(cost, weights)

print("Example weight after: ", weights[-1][0])
Is it possible to print out the network diagram like what we can do in IBM Qiskit?
We have a circuit drawer whose output can be printed using print(qnn.draw()). However, it is a text-based drawer, and the circuit in this case might be a little too deep, so the output doesn’t fit nicely on your screen.
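As a quick sketch (note that, in the PennyLane version used above, the QNode generally needs to be evaluated at least once before it can be drawn):

# run the circuit once so the QNode has a constructed circuit to draw
qnn(inputs, weights)
print(qnn.draw())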
Here is the last “out of scope” question for you. In most engineering work, people like to ask about the Big-O of a proposed algorithm. What is the Big-O for the Xanadu CV quantum neural network? I have studied two other different types of QNNs. For example, the paper ‘Quantum algorithms for feed forward neural networks’ by J. Allcock et al. does describe the Big-O. However, there is no description of the Big-O in the paper ‘Training deep quantum neural networks’.
This is still a bit of an open research question, especially for near-term algorithms. A lot of algorithms do show speed-ups or improved data capacity, but they tend to require quite deep circuits and error correction. Instead, more “near-term” algorithms focus on the variational approach: using a fixed circuit of limited depth and altering its parameters. In that case the scaling is less clear, but we might expect benefits like an improved quality of training (although this still needs to be formalized).