QTape within a QNode

Hi Team,

Is there any way to include a tape in a QNode? For example, suppose I build a set of operations and measurements into a tape and want to execute that tape in parallel through a QNode. I can execute tapes in parallel using the execute command, but how can this be done in a QNode? Is it possible at all?

Please let me know your perspectives.

Below is the code:

import pennylane as qml
from pennylane import numpy as np

wires = 20
dev = qml.device("default.qubit")

def circuit(params):
    tapes = []
    for j in range(5):
        ops = []
        for i in range(wires):
            ops.append(qml.RY(params[j][i], wires=i))
        for i in range(wires):
            ops.append(qml.CNOT(wires=[i, (i + 1) % wires]))
        meas = [qml.expval(qml.PauliZ(wires - 1))]
        tape = qml.tape.QuantumTape(ops, meas)
        tapes.append(tape)
    return tapes


params = [np.random.random(wires) for _ in range(5)]

qml.execute(circuit(params), dev)  # This code gives output properly

q_node = qml.QNode(func=circuit, device=dev)

q_node(params)  # Throwing errors

The Error:

ValueError: ops operation RY(tensor(0.63808942, requires_grad=True), wires=[0]) must occur prior to measurements. Please place earlier in the queue.

Also, is there any other way to execute tapes in qnn.TorchLayer other than turning the circuit into a QNode?

Thanks

Hey @roysuman088,

You can’t create a QNode like this:

q_node=qml.QNode(func=circuit, device=dev)

where circuit returns a bunch of tapes. The function needs to be a quantum function, which implies that it returns a measurement process (like qml.state() for example). Although that’s not technically the cause of your error, let’s come up with something entirely different because I don’t think your approach of creating a bunch of tapes and creating QNodes from those tapes is going to work.

Every QNode has a qtape attribute which returns a quantum tape (QNodes work by tape construction under the hood in PennyLane). So, if you wanted to have like-to-like tapes and QNodes that are executable in their own right, you could do this:

num_qubits = 2

dev = qml.device("default.qubit")

@qml.qnode(dev)
def qnode(params):
    for i in range(num_qubits):
        qml.RY(params[i], wires=i)
        
    return [qml.expval(qml.PauliZ(i)) for i in range(num_qubits)]

def make_tapes(num_tapes, params_set):
    tapes = []
    for t in range(num_tapes):
        qnode(params_set[t])
        tapes.append(qnode.qtape)
    return tapes

num_tapes = 4
params_set = np.random.uniform(0, 1, size=(num_tapes, num_qubits))

tapes = make_tapes(num_tapes, params_set)

for tape in tapes:
    print(qml.execute([tape], dev))
[(tensor(0.88383541, requires_grad=True), tensor(0.99637956, requires_grad=True))]
[(tensor(0.80456514, requires_grad=True), tensor(0.91188955, requires_grad=True))]
[(tensor(0.7377465, requires_grad=True), tensor(0.9468577, requires_grad=True))]
[(tensor(0.7169364, requires_grad=True), tensor(0.73276281, requires_grad=True))]

I guess this is kind of the reverse of what you were doing. I’m curious to know what your application is, though!

For the question of running things in parallel, there’s a section for this in the documentation for default qubit: Accelerate calculations with multiprocessing
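
As a rough sketch of what that looks like (assuming a PennyLane version where default.qubit accepts the max_workers keyword described in that section, which uses a process pool to run independent tapes), you could do something like this:

import pennylane as qml

# Assumption: max_workers is supported by this version of default.qubit
dev_parallel = qml.device("default.qubit", max_workers=4)

tapes = [
    qml.tape.QuantumTape(
        [qml.RY(0.1 * (t + 1), wires=0), qml.CNOT(wires=[0, 1])],
        [qml.expval(qml.PauliZ(1))],
    )
    for t in range(4)
]

# The four independent tapes can be distributed over the worker processes.
# If you run this as a script, put it under an `if __name__ == "__main__":` guard,
# since multiprocessing spawns new processes on some platforms.
print(qml.execute(tapes, dev_parallel))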

Also, is there any other way to execute tapes in qnn.TorchLayer other than turning the circuit into a QNode?

TorchLayer has to receive a QNode :slight_smile:

Thanks @isaacdevlugt for the suggestion. I’m thinking along the same lines. Actually, I’m looking to execute the same VQC in parallel for different parameters under a TorchLayer. Is it possible to do so?

@roysuman088 you’re referring to broadcasting! :slight_smile: This is already a feature:

import pennylane as qml
from pennylane import numpy as np

import torch

n_qubits = 2
dev = qml.device("default.qubit.torch", wires=n_qubits)

@qml.qnode(dev)
def qnode(inputs, weights):
    qml.AngleEmbedding(inputs, wires=range(n_qubits))
    qml.BasicEntanglerLayers(weights, wires=range(n_qubits))
    return [qml.expval(qml.PauliZ(wires=i)) for i in range(n_qubits)]

n_layers = 6
weight_shapes = {"weights": (n_layers, n_qubits)}
qlayer = qml.qnn.TorchLayer(qnode, weight_shapes)

batch_size = 10
inputs = torch.rand((batch_size, n_qubits))
qlayer(inputs)

with qml.Tracker(dev) as tracker:
    qlayer(inputs)

tracker.totals
{'executions': 1, 'batches': 1, 'batch_len': 1}

As you can see, the model was evaluated over 10 inputs but the device had to execute something only once :slight_smile:

Thanks @isaacdevlugt for the suggestion. It seems the batch execution from qnn.TorchLayer handles what I have in mind. For my problem, I’m passing batches through the layer, and each batch needs to execute the same circuit multiple times, which I would like to do in parallel. What would be your suggestion in this scenario?

I’m not sure I understand your application 100%. Could you show some code that demonstrates what you’re trying to do?

Hey @isaacdevlugt, it’s difficult to show the code, as it is a bit complex, but I can try to describe the flow. We create a DataLoader with a batch_size for our neural network model. In our model, there is a TorchLayer that uses VQCs to train the model. But it’s not a single VQC in the layer: we are using 5 VQCs within the TorchLayer, which are the same VQC with different weights. So within a batch, the 5 VQCs currently run sequentially, but we want them to execute in parallel, since the input is the same and only the weights differ. That’s the concern for our work.

I’ll try to share a demo code, but please let me know if you understand the concern.

Thanks

Okay, I think I understand what you’re trying to do but let me re-explain in my own words and you can tell me if we’re on the same page :slight_smile:

You have a hybrid neural network with 5 separate quantum layers. Let’s say you have a batch size of 5 for simplicity. What you want to be able to do is pass one data point in the batch (of which there are 5) to one of your layers, another data point in the batch to another layer, and so on. You’d also like to be able to do this in parallel.

Is that correct?

Not 5 separate quantum layers… it’s only one quantum layer, which has 5 VQCs with the same inputs and different weights. So I want to execute these 5 VQCs in parallel. Say the batch size is 5: for a single batch of data, it currently goes through the 5 VQCs sequentially, but we want them to run in parallel.

Just let me know if we are on the same page.

I’d also like to know the approach behind your suggestion, along with the above.

Thanks.

Is there a reason why this needs to be looked at / formulated as 5 VQCs? What if you just combine them into one circuit and reuse the inputs in the right places? :thinking:
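
To make that concrete, here’s a minimal sketch of what I mean. The templates (AngleEmbedding, BasicEntanglerLayers) and the sizes are just placeholders for your actual VQC; the point is that the 5 copies live on disjoint blocks of wires inside a single QNode, reusing the same inputs but taking a separate slice of the weights per block:

import pennylane as qml
import torch

n_vqcs = 5          # hypothetical: number of VQC copies
n_vqc_qubits = 2    # hypothetical: qubits per copy
n_layers = 3

dev = qml.device("default.qubit", wires=n_vqcs * n_vqc_qubits)

@qml.qnode(dev)
def combined_qnode(inputs, weights):
    # weights has shape (n_vqcs, n_layers, n_vqc_qubits): one weight block per copy
    for v in range(n_vqcs):
        block = range(v * n_vqc_qubits, (v + 1) * n_vqc_qubits)
        qml.AngleEmbedding(inputs, wires=block)            # same inputs in every block
        qml.BasicEntanglerLayers(weights[v], wires=block)  # different weights per block
    return [qml.expval(qml.PauliZ(w)) for w in range(n_vqcs * n_vqc_qubits)]

weight_shapes = {"weights": (n_vqcs, n_layers, n_vqc_qubits)}
qlayer = qml.qnn.TorchLayer(combined_qnode, weight_shapes)

inputs = torch.rand((10, n_vqc_qubits))  # a batch of 10 samples
print(qlayer(inputs).shape)              # (10, n_vqcs * n_vqc_qubits)

One device execution then covers all 5 copies, and the batch broadcasting from the earlier example still applies on top of it.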

Hi @isaacdevlugt, can you please provide some more details on your suggestion? How would I provide different parameters in a combined circuit?

Also, I’m looking into parameter broadcasting, where the parameters are passed to the gates as a list matching the batch_size, but I don’t understand how it actually works, because the circuit is only generated once and only a single execution occurs. Can you please also clarify this point?

Thanks

Hi @isaacdevlugt … any suggestions on the above points?

Hi @roysuman088 !

Sorry for the delay in my response. Isaac hasn’t been available for the past couple of days but I’ll do my best to respond.

I don’t have the full context of the question but I understand that it’s mostly about parameter broadcasting. The simulation is indeed done in a single execution. You can find more details in the QNode section of the documentation. Note that if not natively supported by the underlying device, parameter broadcasting may result in additional quantum device evaluations.
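
As a small illustration, here is a minimal sketch (plain NumPy interface, no TorchLayer) where a batch of angles is passed to a single gate; the circuit is constructed and executed only once for the whole batch:

import pennylane as qml
from pennylane import numpy as np

dev = qml.device("default.qubit", wires=2)

@qml.qnode(dev)
def circuit(angles):
    # `angles` has shape (batch_size,): one RY rotation angle per batch element
    qml.RY(angles, wires=0)
    qml.CNOT(wires=[0, 1])
    return qml.expval(qml.PauliZ(1))

angles = np.linspace(0, np.pi, 7)  # a batch of 7 parameters

with qml.Tracker(dev) as tracker:
    print(circuit(angles))  # 7 expectation values

print(tracker.totals)  # default.qubit broadcasts natively, so a single execution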

Does this answer your question? Or do you have additional questions?