Different parameters for each QNode in qml.QNodeCollection


The documentation for qml.QNodeCollection says all QNodes have to share the same parameters. I am looking to get some more insight into how this works.

I am running a PyTorch experiment with several copies of the same QNode that take in the same input vector. I want each circuit to have its own unique randomly distributed weights (i.e. separate calls of torch.randn), and also for each circuit to update its own weights throughout training. Is this possible?

Thanks! :slight_smile:

Hi @James_Ellis! This is not currently possible using QNode collections, but it is something we are working to support :slight_smile:

For now, you can index into the QNode collection and evaluate each QNode with the required input vector:

for qnode, input_vector in zip(qnodes, inputs):
    result = qnode(input_vector)

Thanks again for the help @josh,

I seem to have a problem with the weights no longer training now. Is there anything different I should be doing? For reference, my code roughly follows the PennyLane transfer learning tutorial.


Hi James, could you post a small/minimal non-working example and the corresponding traceback?

That will help us narrow down the cause.

class Generator_Quantum(nn.Module):
    def __init__(self, n_qubits, q_depth, q_delta=0.1):
        """This is the quantum generator as described in https://arxiv.org/pdf/2010.06201.pdf"""
        super().__init__()
        self.q_params = nn.ParameterList([nn.Parameter(q_delta * torch.randn(q_depth * n_qubits)) for i in range(8)])
        self.n_qubits = n_qubits
        self.q_depth = q_depth
        # Spread of the random parameters for the parameterised quantum gates
        self.q_delta = q_delta
        device = qml.device('lightning.qubit', wires=self.n_qubits)
        # This is just a class with a simple circuit function - obtained by quantum_sim.circuit()
        self.quantum_sim = QuantumSim(n_qubits, q_depth)

        self.qnodes = qml.QNodeCollection(
            [qml.QNode(self.quantum_sim.circuit, device, interface="torch") for i in range(8)]
        )

    def forward(self, noise):
        # `device` here refers to the torch device, defined globally elsewhere
        q_out = torch.Tensor(0, 8 * (2 ** self.n_qubits))
        q_out = q_out.to(device)

        # Apply the quantum circuit to each element of the batch and append to q_out
        for elem in noise:
            patched_array = np.empty((0, 2 ** self.n_qubits))
            for p, qnode in zip(self.q_params, self.qnodes):
                q_out_elem = qnode(elem, p).float().detach().cpu().numpy()
                patched_array = np.append(patched_array, q_out_elem)
            patched_tensor = torch.Tensor(patched_array).to(device).reshape(1, 8 * (2 ** self.n_qubits))
            q_out = torch.cat((q_out, patched_tensor))

        return q_out

Let me explain this line:

`q_out = torch.Tensor(0, 8 * (2 ** self.n_qubits))`

The `2 ** self.n_qubits` comes from the circuit output being `qml.probs()` on all qubits. The 8 is because I am trying to have 8 `lightning.qubit` circuits run at the same time for a concatenated output.

On each forward pass, a single vector `noise` of size n_qubits is passed to each of the 8 QNodes.

`for elem in noise:`

This loops over the batch.
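The shape bookkeeping described above can be sanity-checked without a quantum device by stubbing out the circuits (a sketch; all names here are illustrative):

```python
import torch

n_qubits, n_patches, batch_size = 2, 8, 5

def fake_qnode(elem, params):
    # Stand-in for a circuit whose measurement is qml.probs() on all qubits,
    # so it returns 2**n_qubits probabilities
    return torch.full((2 ** n_qubits,), 1.0 / 2 ** n_qubits)

noise = torch.randn(batch_size, n_qubits)
params = [torch.randn(n_qubits) for _ in range(n_patches)]

q_out = torch.empty(0, n_patches * 2 ** n_qubits)
for elem in noise:  # loop over the batch
    patches = [fake_qnode(elem, p) for p in params]
    row = torch.cat(patches).reshape(1, -1)  # 8 * 2**n_qubits values per element
    q_out = torch.cat((q_out, row))

print(q_out.shape)  # torch.Size([5, 32])
```

Each batch element contributes 8 concatenated probability vectors of length 2 ** n_qubits = 4, giving rows of length 32.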

I hope I made this clear; if not, I can provide more code or explanations in case I forgot to define anything.

Thanks for your help!

Sorry, I forgot to say: there is no traceback.

Hi @James_Ellis. Based on the functions you posted above, it is not easy to find out why the weights cannot be trained. Could you please post the full working version of your code which provides actual output? Thanks.

I have uploaded the files to github https://github.com/jamesellis1999/qgan

The QNodeCollection is in a file called `generator.py`.

The experiment can be run with `python main.py`.

I verified that the weights aren’t changing by running

for p in netG.parameters():
    print(p)

I can confirm the weights changed when using a single QNode.
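One way to make that check concrete (a minimal torch-only sketch, with a toy module standing in for netG): snapshot the parameter values before an optimiser step and compare them afterwards.

```python
import torch
import torch.nn as nn

torch.manual_seed(0)

net = nn.Linear(4, 2)  # toy stand-in for netG
opt = torch.optim.SGD(net.parameters(), lr=0.1)

# Snapshot the parameter values before one training step
before = [p.detach().clone() for p in net.parameters()]

loss = net(torch.randn(3, 4)).sum()
opt.zero_grad()
loss.backward()
opt.step()

# Compare after the step: a parameter that never changes is not training
changed = [not torch.equal(b, p.detach()) for b, p in zip(before, net.parameters())]
print(changed)
```

If any entry stays `False` across many steps, that parameter is not receiving gradients.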

Thank you so much for your help!

Hi @James_Ellis. Thank you for sharing the code. I had a brief look at it, and I can reproduce the issue with the weights not training, although I don’t know exactly why that is. There is a lot going on, and it’d take quite a bit of effort to understand and pinpoint the exact issue.

It would be best if you could try to find out exactly which part of the code causes this and attempt to create a minimal example where it still happens, e.g. just training a smaller QNodeCollection directly, with/without torch. If the issue still persists, feel free to let us know and we can have a closer look at it.