QNN KerasLayer with Model

Hello,

Let’s say I already have a Keras Model (call it classical_nn) to which I want to append a qnn.KerasLayer at the end.

Classically in TensorFlow, one would add layers to a Keras Model as follows:
classical_nn = Model(…)
pre_out = classical_nn.layers[-1].output
pre_out = Dense(16, activation='tanh')(pre_out)

And let’s say I want to add a qnn.KerasLayer set up as:
qlayer = qml.qnn.KerasLayer(quantum_classifier, weight_shapes, output_dim=1)

How would one connect qlayer and pre_out in this case to obtain a keras.Model?

Thanks in advance for your support :slight_smile:

Hey @cnada,

PennyLane’s KerasLayer is a subclass of Keras’ Layer class, which means you can treat it just like any other Keras layer.

For example, suppose we turn our QNode into a Keras Layer:

qlayer = qml.qnn.KerasLayer(qnode, weight_shapes, output_dim=2)
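
Here, qnode and weight_shapes are assumed to be defined already. As a point of reference, a minimal sketch of what they could look like (the device, templates, and shapes below are just one illustrative choice):

import pennylane as qml
import tensorflow as tf

n_qubits = 2
dev = qml.device("default.qubit", wires=n_qubits)

@qml.qnode(dev)
def qnode(inputs, weights):
    # Encode the classical features, then apply a trainable entangling block
    qml.templates.AngleEmbedding(inputs, wires=range(n_qubits))
    qml.templates.StronglyEntanglingLayers(weights, wires=range(n_qubits))
    # One expectation value per output dimension
    return [qml.expval(qml.PauliZ(i)) for i in range(n_qubits)]

# StronglyEntanglingLayers expects weights of shape (n_layers, n_qubits, 3)
weight_shapes = {"weights": (3, n_qubits, 3)}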

We can use the Sequential model to create a hybrid:

clayer = tf.keras.layers.Dense(2)
clayer2 = tf.keras.layers.Dense(2)
model = tf.keras.models.Sequential([clayer, qlayer, clayer2])
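
The hybrid Sequential model can then be compiled and trained like any other Keras model. For example, a rough sketch with placeholder data, optimizer, and loss:

import numpy as np

model.compile(optimizer=tf.keras.optimizers.Adam(0.01), loss="mse")

# Dummy 2-dimensional inputs and targets matching the layer sizes above
X = np.random.random((8, 2))
y = np.random.random((8, 2))
model.fit(X, y, epochs=2, batch_size=4)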

We can also use Keras’ functional API approach:

inputs = tf.keras.Input(shape=(2,))
x = tf.keras.layers.Dense(2, activation="tanh")(inputs)
x = qlayer(x)
outputs = tf.keras.layers.Dense(2)(x)
model = tf.keras.Model(inputs=inputs, outputs=outputs)
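
As a quick sanity check, the functional model can be called directly on a batch of data (the shape here assumes the 2-dimensional Input above):

import numpy as np

out = model(np.random.random((4, 2)))
print(out.shape)  # (4, 2)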

You can also add qlayer to an existing model using:

  • model.add(qlayer) if using a Sequential model (a minimal sketch follows after this list)
  • If using the functional API approach:
outputs2 = qlayer(outputs)
model = tf.keras.Model(inputs=inputs, outputs=outputs2)
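
For the Sequential case, a minimal sketch (with a placeholder classical layer) could look like:

model = tf.keras.models.Sequential([tf.keras.layers.Dense(2, activation="tanh")])
model.add(qlayer)
model.add(tf.keras.layers.Dense(2))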

Hope this helps!

Thanks @Tom_Bromley. If I may ask, when doing x = qlayer(x) I get the error:
AttributeError: 'tuple' object has no attribute 'layer'

Do you know what it means? I have no clue here.

No problem! Would you be able to share the code you’re running so I can take a closer look?

@Tom_Bromley Here is example code replicating the error: error_pennylane_classical_net_quantunlayer.py (3.9 KB)

Thanks for sharing! I managed to get an output from the hybrid network by changing the script from line 100 onwards to:

import pennylane as qml

n_qubits = 5
dev = qml.device('default.qubit', wires=n_qubits)

@qml.qnode(dev)
def quantum_classifier(inputs, weights):
    # Encode the 5 classical features, then apply trainable entangling layers
    qml.templates.AngleEmbedding(inputs, wires=range(n_qubits))
    qml.templates.StronglyEntanglingLayers(weights, wires=range(n_qubits))

    return qml.expval(qml.PauliZ(0))

# StronglyEntanglingLayers expects weights of shape (n_layers, n_qubits, 3)
weight_shapes = {"weights": (2, n_qubits, 3)}

qlayer = qml.qnn.KerasLayer(quantum_classifier, weight_shapes, output_dim=1)

# Flatten the classical network's output and reduce it to 5 features, one per qubit
flat = tf.keras.layers.Flatten()
downsize_layer = tf.keras.layers.Dense(5)
x = downsize_layer(flat(model.output))
outputs = qlayer(x)

hybrid_model = Model(inputs=model.input, outputs=outputs)

import numpy as np
in_data = np.random.random((1, 128, 128, 1))
hybrid_model(in_data)

Note that we need to flatten the output of the classical network and shrink it down to a 5-dimensional input for the quantum layer. This could probably be done with a few more fully connected intermediate layers.
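
For example, a rough sketch of a more gradual reduction (the intermediate layer sizes and activations here are arbitrary choices):

x = tf.keras.layers.Flatten()(model.output)
x = tf.keras.layers.Dense(64, activation="relu")(x)
x = tf.keras.layers.Dense(16, activation="relu")(x)
x = tf.keras.layers.Dense(5, activation="tanh")(x)  # 5 features, one per qubit
outputs = qlayer(x)
hybrid_model = Model(inputs=model.input, outputs=outputs)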

However, it’s also important to bear in mind that the quantum layer will add a significant overhead to your network, due to the cost of simulating a quantum system. Combined with the size of your classical network and the resulting number of parameters (2,161,361), it will take a long time to perform even one optimization step. I’d recommend starting out with a smaller-scale model as more of a prototype.

Gotcha. Thanks for the much appreciated tips. Maybe making the previous layers non-trainable is another option.
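
A minimal sketch of that idea, assuming the classical network is the model object from the script above (the layers need to be frozen before compiling; the optimizer and loss are placeholders):

# Freeze the classical layers so only the quantum layer's weights are trained
for layer in model.layers:
    layer.trainable = False

hybrid_model.compile(optimizer=tf.keras.optimizers.Adam(0.01), loss="mse")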
