Drawing BasicEntanglerLayer circuit

Hello,

I am unable to print/draw the circuit when using BasicEntanglerLayers.
Can anyone help me with that? Below is some sample code:

import pennylane as qml

n_qubits = 2
dev = qml.device("default.qubit", wires=n_qubits)

@qml.qnode(dev)
def qnode(inputs,weights):
    for i in range(n_qubits):
        qml.Hadamard(wires=i)
    qml.AngleEmbedding(inputs, wires=range(n_qubits))
    qml.BasicEntanglerLayers(weights, wires=range(n_qubits))
    return [qml.expval(qml.PauliZ(wires=i)) for i in range(n_qubits)]

n_layers = 6
weight_shapes = {"weights": (n_layers, n_qubits)}

X=[1,2]

qml.draw_mpl(qnode)(X,weight_shapes)

Once we are able to draw the above circuit, is it possible to print the inner structure of AngleEmbedding instead of a block, and the same for BasicEntanglerLayers?

Thanks in advance

Hi @Muhammad_Kashif! This is because the drawer does not take the device gate set into account when drawing the QNode. To force it to decompose gates to match the device, we must specify expansion_strategy="device":

import pennylane as qml
import numpy as np

n_qubits = 2
dev = qml.device("default.qubit", wires=n_qubits)

@qml.qnode(dev)
def qnode(inputs, weights):
    for i in range(n_qubits):
        qml.Hadamard(wires=i)
    qml.AngleEmbedding(inputs, wires=range(n_qubits))
    qml.BasicEntanglerLayers(weights, wires=range(n_qubits))
    return [qml.expval(qml.PauliZ(wires=i)) for i in range(n_qubits)]

n_layers = 6
weights = np.random.random([n_layers, n_qubits])
X = [1, 2]
>>> print(qml.draw(qnode, expansion_strategy="device")(X, weights))
0: ──H──RX(1.00)──RX(0.42)─╭C──RX(0.29)─╭C──RX(0.50)─╭C──RX(0.25)─╭C──RX(0.06)─╭C──RX(0.51)─╭C─┤  <Z>
1: ──H──RX(2.00)──RX(0.57)─╰X──RX(0.19)─╰X──RX(0.54)─╰X──RX(0.66)─╰X──RX(0.18)─╰X──RX(0.60)─╰X─┤  <Z>

hi @josh,

Thank you for answering.
May I please ask why we can’t draw the circuit when we use weight_shapes, e.g. weight_shapes = {"weights": (n_layers, n_qubits)}, taken from the QNN tutorial here?

In the above example (which you corrected so the circuit can be drawn), RX (from AngleEmbedding) encodes the input X and the parametrized gates of BasicEntanglerLayers take the weights. In the QNN context (the tutorial referenced above), shouldn’t the parametrized gates after encoding take the values of the encoded data instead of the weights? How can we draw the circuit in that case, when we only specify the weight_shapes?

Thanks

Hi @Muhammad_Kashif, good question.

Notice that in the example you’re referencing, weight_shapes is passed into a Keras layer, not directly into the BasicEntanglerLayers. If you check the docs for BasicEntanglerLayers you will notice that the input for “weights” must be a tensor.
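A rough, NumPy-only sketch of what the Keras layer does with weight_shapes may help (the variable names below are illustrative, not PennyLane API):

```python
import numpy as np

n_layers, n_qubits = 6, 2
weight_shapes = {"weights": (n_layers, n_qubits)}

# Roughly what qml.qnn.KerasLayer does internally: for every entry in
# weight_shapes it creates a trainable tensor of that shape, and on each
# forward pass it calls the QNode with those tensors, not with the dict.
trainable = {name: np.random.random(shape) for name, shape in weight_shapes.items()}

print(trainable["weights"].shape)  # (6, 2)
```

So the shape dictionary only tells Keras how big the weight tensors should be; the QNode itself only ever receives concrete weight arrays.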

Let me know if this answers your question!

Hi @CatalinaAlbornoz,

Thank you for the answer. However, I am still a little confused about how the KerasLayer treats these weight_shapes, and about what the weights (to optimize in QNNs) actually are. I understand that weight_shapes tells the KerasLayer the shape of the weights: for instance, if weight_shapes = (1, 2), there is 1 layer of parameters for two parametrized gates (the number of qubits).
Now, in the case of AngleEmbedding, the input data is encoded into RX, RY or RZ rotations; clear so far. Next we have, for instance, the BasicEntanglerLayers circuit, containing rotations (RX, RY or RZ) and entanglements.
The question is:

  1. The rotations in the BasicEntanglerLayers circuit take the values of the weights, which we can print and see if we pass actual weight values instead of weight_shapes. But we cannot print the circuit and see the corresponding rotation parameters in BasicEntanglerLayers, so we don’t know what parameters or weights would be passed after the embedding step.

Shouldn’t the parametrized gates after the embedding take the embedded features as parameters instead of the weights? I am not sure whether this happens when weight_shapes is passed to the KerasLayer.

Can you please guide me on this?
Thanks

Hi @Muhammad_Kashif, I’m not sure that I understand your question.

A few messages above, Josh showed that by using expansion_strategy="device" you can show the parameters of your BasicEntanglerLayers. These gates are one-parameter single-qubit rotations, where the rotation angle is given by a particular weight. Basically, the weight, the parameter and the rotation angle are all the same thing in this case.
We can’t draw with weight_shapes because what we need are the actual parameters (the weights).

If you look closely at the QNN demo you will notice that the weight shapes are necessary to create the quantum layer, but what goes into the qnode are the weights themselves.

About your second question I sense there might be some confusion. When we refer to features we refer to your data, which is not trainable. This data is embedded into your quantum circuit using AngleEmbedding. You then need an ansatz with parametrized gates and in this case we use BasicEntanglerLayers. So basically BasicEntanglerLayers takes parameters in the form of weights, and it’s AngleEmbedding that actually embeds your features (your data) into the circuit.

I hope this is clear. Please let me know if this helps!

Hi @CatalinaAlbornoz,

Thanks for the clarification. One last thing: expansion_strategy="device" does show the internal structure of the template layer but, as you rightly said, for that we need to pass the actual values of the weights and not weight_shapes. Let me try to explain my confusion with an example.
For instance, suppose I have 2 qubits, my data points are [a, b], and my weights are [c, d]. My QNode contains the AngleEmbedding layer, which takes the data points [a, b] and makes them trainable for the following variational circuit, which let’s say is BasicEntanglerLayers in this case. Now, the parameters of the parametrized gates in my variational circuit should be the embedded data points, but instead they take the weights ([c, d]) as defined above. So how does data embedding play a role here, and where does the embedded data go?
I am assuming that, in the QNN context, we only specify the weight_shapes when converting the QNode to a KerasLayer, and not the actual weights, because in this case the weights will actually be the embedded data points/features. Am I right?

I hope I have made my question clear.
Thanks

Hi @Muhammad_Kashif,

I believe there’s still some confusion as to what a data point is, what a weight is, and how the embedding and templates work. I’ll go back to a basic example. Another very good explanation is given in this blog post on how to start learning quantum machine learning (I strongly encourage you to read it).

I want to predict the change in value of a house based on the distance to the city centre. I have this information for a few houses that are at X = 0, 1, 2, 3, 4, and 5 km from the city centre. Their respective changes in value are Y = 0, 0.84, 0.91, 0.14, -0.76 and -0.96.

I want to use a quantum circuit to predict the change in value located at any distance from the city centre. What do I do?

I first create a qnode

@qml.qnode(dev)
def circuit(inputs):
    # I'll add something here
    return something

What inputs will my circuit need? It will need the distance from the city centre, which is my datapoint. Notice that this data is fixed, I don’t train over it. I will also need some parameters that I can put into my circuit, and which I can modify after each iteration, hoping to get a better model for my data.

My return value will be an expectation value, which will give me a prediction for the change in value of a house at the particular distance specified by the input datapoint.

This is how the qnode will be looking:

@qml.qnode(dev)
def circuit(distance, parameters):
    # something that takes the distance
    # something that takes some trainable parameters
    return qml.expval(qml.PauliZ(0))

Now how do we put a distance into a quantum circuit? We put the classical data points into the quantum circuit by using an embedding. In this case we can use the angle embedding to plug my data into my circuit. This will basically create a rotation for every one of my features. In this case I only have one feature which is distance. But I could have had other features such as area of the house, number of bedrooms, and others.
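What the angle embedding does for a single feature can be sketched with plain NumPy (no PennyLane required); this is just the standard RX rotation matrix, with the feature used as the rotation angle:

```python
import numpy as np

def rx(theta):
    # standard single-qubit RX rotation matrix
    c, s = np.cos(theta / 2), np.sin(theta / 2)
    return np.array([[c, -1j * s], [-1j * s, c]])

distance = 1.5                                # the single feature (a data point)
state = rx(distance) @ np.array([1.0, 0.0])   # embed it into the |0> state

# the feature now lives in the amplitudes of the qubit state
print(np.abs(state) ** 2)
```

The data point is fixed: it sets the angle of this first rotation and is never updated during training.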

And where do the trainable parameters go? I can put them into my circuit as angles in rotation gates, where these angles can in fact change. I can also add other gates such as CNOTs and Hadamards to add more complexity to my circuit. I can use BasicEntanglerLayers to do this in an easy way. But what about the weights? My parameters are the weights!

So now we have

@qml.qnode(dev)
def circuit(distance, parameters):
    qml.AngleEmbedding(distance, wires=range(n_qubits))
    qml.BasicEntanglerLayers(parameters, wires=range(n_qubits))
    return qml.expval(qml.PauliZ(0))

Now let's get some initial values for the distance and parameters and draw the circuit:

import pennylane as qml
import numpy as np

n_qubits = 2
dev = qml.device('lightning.qubit', wires=n_qubits)

@qml.qnode(dev)
def circuit(distance, parameters):
    qml.AngleEmbedding(distance, wires=range(n_qubits))
    qml.BasicEntanglerLayers(parameters, wires=range(n_qubits))
    return qml.expval(qml.PauliZ(0))

n_layers = 6
weights = np.random.random([n_layers, n_qubits])
X = [1.5]

qml.draw_mpl(circuit, expansion_strategy="device", decimals=1)(X, weights)

As you can see I have a 2-qubit circuit where the first RX rotation is the embedding or encoding of my data (the distance) and then I have 6 layers of parametrized rotations and CNOTs.

I encourage you to first try out variational circuits without using Keras and then you can try adding Keras Layers into the mix.

Please let me know if this explanation was helpful or if you have other questions.

Thank you. Please also note that the qml.BasicEntanglerLayers documentation should state:

rotation (pennylane.ops.Operation) – one-parameter single-qubit gate to use, if None, qml.RX is used as default

instead of:

rotation (pennylane.ops.Operation) – one-parameter single-qubit gate to use, if None, RX is used as default