Extracting weights of a QNode circuit object or the TorchLayer object?

Hi,

I wanted to know if there is a way to access the weights of a QNode circuit object, or the weight_shapes information on a TorchLayer created from a QNode.

I tried accessing the qtape attribute on my QNode to get at the circuit and its parameters (which, as far as I understand, are the weights in my case), but that doesn’t work, and it doesn’t seem like the right approach anyway.

Kindly help. :slight_smile:

Thank you!

Hey @kamzam! Welcome to the forum :muscle:

Great question! TorchLayer behaves just like a native PyTorch layer, so you can use model.parameters() :slight_smile:. Here’s an example:

import pennylane as qml
import torch

n_qubits = 2
dev = qml.device("default.qubit", wires=n_qubits)

@qml.qnode(dev)
def qnode(inputs, weights):
    qml.AngleEmbedding(inputs, wires=range(n_qubits))
    qml.BasicEntanglerLayers(weights, wires=range(n_qubits))
    return [qml.expval(qml.PauliZ(wires=i)) for i in range(n_qubits)]

n_layers = 2
weight_shapes = {"weights": (n_layers, n_qubits)}

qlayer = qml.qnn.TorchLayer(qnode, weight_shapes)

clayer_1 = torch.nn.Linear(2, 2)
clayer_2 = torch.nn.Linear(2, 2)
softmax = torch.nn.Softmax(dim=1)
layers = [clayer_1, qlayer, clayer_2, softmax]
model = torch.nn.Sequential(*layers)

for param, layer in zip(model.parameters(), model.named_children()):
    print(param, layer)
Parameter containing:
tensor([[ 0.0943, -0.2240],
        [ 0.4012, -0.4323]], requires_grad=True) ('0', Linear(in_features=2, out_features=2, bias=True))
Parameter containing:
tensor([0.5016, 0.2354], requires_grad=True) ('1', <Quantum Torch Layer: func=qnode>)
Parameter containing:
tensor([[6.2790, 2.8270],
        [2.4543, 5.2021]], requires_grad=True) ('2', Linear(in_features=2, out_features=2, bias=True))
Parameter containing:
tensor([[ 0.4883,  0.1023],
        [-0.3336, -0.3181]], requires_grad=True) ('3', Softmax(dim=1))

Hope this helps! :smile:

2 Likes

Thanks Isaac, yes this makes sense. :slight_smile:

1 Like

Awesome! Glad I could help. Let us know if you have any other questions :rocket:

@isaacdevlugt This only seems to print the biases for the qlayer. Is there any way to print the weights as well?

Hi @Sarvapriya_Tripathi ,

You could try printing the weights and bias for each layer with the code below. Note that the quantum layer doesn’t have a “weight” attribute, but a “weights” attribute instead. It also doesn’t have a “bias” attribute.

I’ve used try-except blocks just for ease; there may be more efficient ways of doing this.

# print the weights and bias for all layers except the softmax
for i in range(len(layers) - 1):
    # try to obtain the weights for the layer
    try:
        print(f"Layer {i} weights \n", model[i].weight)
    except AttributeError:
        try:
            print(f"Layer {i} weights \n", model[i].weights)
        except AttributeError:
            print(f"Layer {i} has no attribute 'weight' or 'weights'")

    # try to obtain the bias for the layer
    try:
        print(f"Layer {i} bias \n", model[i].bias)
    except AttributeError:
        print(f"Layer {i} has no attribute 'bias'")

Let me know if this helps!