Testing and Training Time of a Hybrid Model

Hello!
I have been trying a hybrid model of an LSTM and a quantum layer. The quantum layer is either a combination of an embedding and an entangling layer (angle embedding + basic entangler layer; amplitude embedding + basic entangler layer; IQP embedding + basic entangler layer) or a QAOA embedding layer. Comparing training times, the model with the QAOA embedding as its qlayer was faster than the embedding-plus-entangler combinations, but at testing time the QAOA model was slower. Is there a reason for this? I am using the lightning.qubit simulator, and the classical model is optimized by cuDNN.
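For context, here is roughly how the pieces fit together (a minimal sketch: the layer sizes and input shape are made up for illustration, and the qnode is a condensed version of the one I share further down):

import pennylane as qml
import tensorflow as tf

n_qubits = 10
dev = qml.device("lightning.qubit", wires=n_qubits)

@qml.qnode(dev)
def qnode(inputs, weights):
    qml.AngleEmbedding(inputs, wires=range(n_qubits))
    qml.BasicEntanglerLayers(weights, wires=range(n_qubits))
    return [qml.expval(qml.PauliZ(wires=i)) for i in range(n_qubits)]

qlayer = qml.qnn.KerasLayer(qnode, {"weights": (1, n_qubits)}, output_dim=n_qubits)

# LSTM front end feeding the quantum layer (sizes are illustrative)
inputs = tf.keras.Input(shape=(8, 4))            # (timesteps, features)
x = tf.keras.layers.LSTM(16)(inputs)             # classical recurrent block
x = tf.keras.layers.Dense(n_qubits)(x)           # match the quantum layer's input size
x = qlayer(x)                                    # quantum layer
outputs = tf.keras.layers.Dense(1)(x)
model = tf.keras.Model(inputs, outputs)
model.compile(optimizer="adam", loss="mse")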

Hi @aouie, this is interesting behaviour.

Did you try splitting your training and testing data differently, and seeing if you get the same behaviour? It is possible that the data you’re using for testing is just randomly harder to model. But if the time difference is significant and this is repeated regardless of how you split your data then this is weird. If that’s the case, can you please share a minimal example that shows this behaviour, so that we can try to replicate this?
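For example, something like this (a sketch assuming array data; train_test_split is scikit-learn's standard splitter, and a different random_state gives you a different split):

import numpy as np
from sklearn.model_selection import train_test_split

X = np.random.rand(100, 10)   # placeholder features
y = np.random.rand(100)       # placeholder labels

# Shuffle before splitting; change random_state to get a different 80/20 split
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, shuffle=True, random_state=42
)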

Hello! I split the data into 80% training and 20% testing. I am using the same number of qubits (10), wires (10), and layers (1) as well; the only difference is the embedding. I have tried re-running the models, but the training and testing times are still the same.
[screenshot of the model summary]
The Keras layer there is my QNN, and the model (Functional) is my LSTM.
Here is sample code for the qlayer:
import pennylane as qml
import tensorflow as tf
import numpy as np

n_qubits = 10
dev = qml.device("lightning.qubit", wires=n_qubits)
tf.keras.backend.set_floatx('float64')

@qml.qnode(dev)
def qnode(inputs, weights):
    # Encode the classical inputs, then apply one entangling layer
    qml.AngleEmbedding(inputs, wires=range(n_qubits))
    # qml.IQPEmbedding(inputs, wires=range(n_qubits), pad_with=0, normalize=True)
    qml.BasicEntanglerLayers(weights, wires=range(n_qubits))
    return [qml.expval(qml.PauliZ(wires=i)) for i in range(n_qubits)]

n_layers = 1
weight_shapes = {"weights": (n_layers, n_qubits)}
print(weight_shapes)

features = np.array([1., 2.])  # (unused in this snippet)
qlayer = qml.qnn.KerasLayer(qnode, weight_shapes, output_dim=n_qubits)
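For the QAOA run, the qnode looks roughly like this instead (a sketch; the weight shape comes from the template's shape helper):

@qml.qnode(dev)
def qnode_qaoa(inputs, weights):
    # QAOAEmbedding embeds and entangles in a single template
    qml.QAOAEmbedding(features=inputs, weights=weights, wires=range(n_qubits))
    return [qml.expval(qml.PauliZ(wires=i)) for i in range(n_qubits)]

weight_shapes_qaoa = {"weights": qml.QAOAEmbedding.shape(n_layers=1, n_wires=n_qubits)}
qlayer_qaoa = qml.qnn.KerasLayer(qnode_qaoa, weight_shapes_qaoa, output_dim=n_qubits)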

Hi @aouie,

Are you able to mix up the data a little? This can help you figure out whether it's the split of your data that's causing the problem.

E.g., your data is [A,B,C,D,W,Z] and you want to find the position in the alphabet for each item. If you always split it into [A,B,C,D] for training and [W,Z] for testing, you may notice that testing always takes longer than training. However, if you change the split to something like [C,D,Z,A] for training and [B,W] for testing, then the times may be different. One way to change this is to change the seed you're using, in case you're generating or shuffling your data randomly.
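It can also help to time training and testing the same way, and to discard the very first prediction call, since TensorFlow traces the model's graph on the first call, which inflates the first measurement. A sketch, assuming model and X_test from your code:

import time

_ = model.predict(X_test, verbose=0)   # warm-up: first call includes one-off graph tracing

start = time.perf_counter()
_ = model.predict(X_test, verbose=0)
print(f"testing time: {time.perf_counter() - start:.3f} s")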

Also, if you can share your full code I can try to see if I can replicate your results.

I hope this helps!