QAOA Embedding Layer

Hi! I'm working on a hybrid forecasting model that uses the QAOA embedding. How do I decide on the optimal number of layers (n_layers) in the embedding? I'm also using an Angle Embedding and a Basic Entangler Layer alongside the QAOA embedding layer.

Hi @aouie, welcome to the forum!

The more layers you have, the more complexity your quantum circuit can represent. However, a large number of layers has drawbacks: it makes training harder and can lead to overfitting. Finding the minimum number of layers that models your data well is a true craft, and the right number depends on the problem, the data, the training procedure, and so on.

The QAOA embedding embeds the features in your data into the circuit. It's not necessary to use the angle embedding (or another embedding) together with it; in fact it might be counterproductive. The same goes for the Basic Entangler Layer: it adds extra parameters and structure to the circuit, but it isn't required. That said, it's interesting to explore different configurations, and maybe exploring this very complex ansatz will help you find a good model for your particular problem.

I hope this helps! Please let me know if you want me to clarify something or if you have other questions.


I am using this code for the quantum layer of my hybrid neural network:

import pennylane as qml
import numpy as np
from pennylane import QAOAEmbedding

n_qubits = 12
dev = qml.device("default.qubit", wires=n_qubits)

@qml.qnode(dev)
def qnode(inputs, weights):
    qml.AngleEmbedding(inputs, wires=range(n_qubits))
    qml.BasicEntanglerLayers(weights, wires=range(n_qubits))
    return [qml.expval(qml.PauliZ(wires=i)) for i in range(n_qubits)]

def circuit(weights, f):
    QAOAEmbedding(features=f, weights=weights, wires=range(n_qubits))
    return [qml.expval(qml.PauliZ(wires=i)) for i in range(n_qubits)]

n_layers = 2
weight_shapes = {"weights": (n_layers, n_qubits)}
print(weight_shapes)
weights = np.random.random(QAOAEmbedding.shape(n_layers, n_wires=2))
features = np.array([1., 2.])
qlayer = qml.qnn.KerasLayer(qnode, weight_shapes, output_dim=n_qubits)

In this code I can't understand the concept of the weights and features. I am using 12 qubits and 12 features in my network.

Thank you

Hi @aouie great question.

The goal with quantum machine learning in general is to make predictions from large sets of data. For this we usually need to generate a model of our data, which can tell us what output to expect for a specific input.

Let's consider the example of buying a house. I want to predict how much a house will cost. A house has many features, such as the size, the distance from the city centre, and the number of windows. Most likely the number of windows will not affect the price, so I can keep just two features: size and distance. My goal is to predict the cost of a house depending on these features. Size and distance are two features that I can encode into my hybrid neural network, and I can use many kinds of embeddings to achieve this.

Now we can explore the concept of weights. In a neural network each edge is assigned a weight that can be changed or trained so that at the end we get a model that we can use to predict the price of a new house. You can learn more about weights and quantum machine learning in this blog post.

If you want to use the QAOA embedding you will need at least as many qubits as you have features, but you can have many more weights than features.

Ideally you want as few weights as possible to model your problem, but complex problems may need many weights.

I hope this helps! Please let me know if you have any further questions.

Oh okay! Then what about the number of layers and wires?
Also, if I remove the angle embedding layer and the basic entangler layers, I get the error NameError: name 'qnode' is not defined.
Here is my code:
n_qubits = 10
dev = qml.device("default.qubit", wires=n_qubits)

@qml.qnode(dev)
# def qnode(inputs, weights):
#     qml.AngleEmbedding(inputs, wires=range(n_qubits))
#     qml.BasicEntanglerLayers(weights, wires=range(n_qubits))
#     return [qml.expval(qml.PauliZ(wires=i)) for i in range(n_qubits)]

def circuit(weights, f):
    QAOAEmbedding(features=f, weights=weights, wires=range(n_qubits))
    return [qml.expval(qml.PauliZ(wires=i)) for i in range(n_qubits)]

n_layers = 2
weight_shapes = {"weights": (n_layers, n_qubits)}
print(weight_shapes)
weights = np.random.random(QAOAEmbedding.shape(n_layers, n_wires=10))
features = np.array([1., 2.])
qlayer = qml.qnn.KerasLayer(qnode, weight_shapes, output_dim=n_qubits)

NameError: name 'qnode' is not defined

Hi @aouie, the reason you get that error is that you commented out the line def qnode(inputs, weights):. You should move @qml.qnode(dev) to just before def circuit(weights, f):, and change "qnode" to "circuit" when you create your Keras layer.

That being said, my kernel keeps dying when I try to run your circuit so I’ll have to dig deeper to understand how to help you make your code work.

Please do try the suggestions I mentioned and let me know if your kernel dies too or if it's just me.

If I do that:
n_qubits = 10
dev = qml.device("default.qubit", wires=n_qubits)

@qml.qnode(dev)
def circuit(weights, f):
    QAOAEmbedding(features=f, weights=weights, wires=range(n_qubits))
    return [qml.expval(qml.PauliZ(wires=i)) for i in range(n_qubits)]

n_layers = 2
weight_shapes = {"weights": (n_layers, n_qubits)}
print(weight_shapes)
weights = np.random.random(QAOAEmbedding.shape(n_layers, n_wires=10))
features = np.array([1., 2.])
qlayer = qml.qnn.KerasLayer(circuit, weight_shapes, output_dim=n_qubits)
I get this error:
TypeError: QNode must include an argument with name inputs for inputting data
My kernel is not dying when I run it on a GPU, but on my laptop it dies after a few epochs.

I’m glad it’s not dying on your GPU @aouie.

What the error is telling you is that your qnode (called circuit in this case) needs an argument named "inputs" containing all non-trainable inputs. Your "f" argument should therefore be renamed to "inputs", since you don't want to optimize over this data. Then, in your QAOAEmbedding, be sure to pass features=inputs.

This is how it should look:

def circuit(weights, inputs):
    QAOAEmbedding(features=inputs, weights=weights, wires=range(n_qubits))

Let me know how it goes with this change!

Hello! I've tried it and it works, but when I fit it in my model I encounter an error. Here is my code:

n_qubits = 10
dev = qml.device("default.qubit", wires=n_qubits)

@qml.qnode(dev)
def circuit(weights, inputs):
    QAOAEmbedding(features=inputs, weights=weights, wires=range(n_qubits))
    return qml.expval(qml.PauliZ(0))

n_layers = 2
weight_shapes = {"weights": (n_layers, n_qubits)}
print(weight_shapes)
weights = np.random.random(QAOAEmbedding.shape(n_layers, n_wires=2))
features = np.array([1., 2.])
qlayer = qml.qnn.KerasLayer(circuit, weight_shapes, output_dim=n_qubits)
clayer_1 = tf.keras.layers.Dense(10, activation="sigmoid")
clayer_2 = tf.keras.layers.Dense(24, activation="linear")
model = tf.keras.models.Sequential([clayer_1, qlayer, clayer_2])
opt = tf.keras.optimizers.Adam(learning_rate=0.01)
model.compile(opt, loss="mse", metrics=["accuracy"])
history = model.fit(X_train, y_train, epochs=100, batch_size=32,
                    validation_split=0.25, verbose=2, shuffle=True)
This is the error message: ValueError: Exception encountered when calling layer “sequential_1” (type Sequential).

Input 0 of layer “dense_1” is incompatible with the layer: expected min_ndim=2, found ndim=1. Full shape received: (32,)

Call arguments received:
• inputs=tf.Tensor(shape=(32, 10), dtype=float32)
• training=True
• mask=None
But if I use the angle embedding and basic entangler, I don't get this error.

Hi @aouie, I'm trying to replicate your problem but my kernel keeps dying. I'll try to run this on a different machine and get back to you with some help. In the meantime, if angle embedding and basic entangler work for you, I suggest you use those instead.

Hello! I can run my program now using just the QAOA embedding layer as the quantum layer in my hybrid model. Regarding the circuit depth: does it depend on the number of wires? That seems to be the only parameter that is iterable; the others are tensors.

Hi @aouie, I’m glad you managed to get it running!

The circuit depth depends on the number of layers: the more layers you have, the deeper your circuit will be, which in turn increases the number of parameters in your circuit.

Thank you so much for this! I’m off to the next step now! I will change my classical model into a deep learning model.

That’s great @aouie! Let us know how it goes!

Hello! I tried to add it in my new model but I don't understand why the parameter count is 0 (unused) in my qlayer.
Model: "sequential"
_________________________________________________________________
Layer (type)                 Output Shape              Param #
=================================================================
gru (GRU)                    (None, 48, 36)            4212
gru_1 (GRU)                  (None, 48, 36)            7992
gru_2 (GRU)                  (None, 48, 36)            7992
gru_3 (GRU)                  (None, 10)                1440
keras_layer (KerasLayer)     (None, 10)                0 (unused)
dense (Dense)                (None, 24)                264
=================================================================
Total params: 21,900
Trainable params: 21,900
Non-trainable params: 0
_________________________________________________________________


Hi @aouie,

It looks like it’s related to this post. The solution seems to be to do a forward pass through the model before printing the summary.

Let me know if this helps!
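In Keras terms (a minimal sketch with hypothetical stand-in layers, not your actual GRU model): a layer's weights are only created once the model has seen the input shape, so running one dummy batch through the model before summary() makes the parameter counts show up.

```python
import numpy as np
import tensorflow as tf

# Hypothetical stand-in model; the point is only the build step.
model = tf.keras.models.Sequential([
    tf.keras.layers.Dense(10, activation="sigmoid"),
    tf.keras.layers.Dense(24, activation="linear"),
])
# Keras only creates (and counts) a layer's weights once it knows
# the input shape, so run one dummy batch through the model first.
dummy_batch = np.random.random((1, 48)).astype("float32")
_ = model(dummy_batch)
model.summary()  # parameter counts are now populated
```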

Hello! Thank you! The forward pass worked!
Model: "sequential"
_________________________________________________________________
Layer (type)                 Output Shape              Param #
=================================================================
gru (GRU)                    (None, 48, 36)            4212
gru_1 (GRU)                  (None, 48, 36)            7992
gru_2 (GRU)                  (None, 48, 36)            7992
gru_3 (GRU)                  (None, 10)                1440
keras_layer (KerasLayer)     (None, 10)                20
dense (Dense)                (None, 24)                264
=================================================================
Total params: 21,920
Trainable params: 21,920
Non-trainable params: 0
_________________________________________________________________


I’m glad it worked @aouie, enjoy using PennyLane!