Turning quantum nodes into Torch Layers

Hello @Tom_Bromley,

Do you have any suggestions for obtaining better accuracies with qml.qnn.TorchLayer models versus the original models? The code has been incorporated into another, classical notebook.

Thank you,

Reference:

Hey @kevinkawchak!

What do you mean by “original models”? Are you trying to compare quantum / hybrid models to purely classical ones on the same task?

Yes, this is the case.

Thanks for clarifying! This is a question we get quite often, and unfortunately the answer isn’t very satisfying :sweat_smile:. There are many things that go into making a machine learning model work well for a specific task, including the choice of optimizer, the choice of hyperparameters (learning rate, step size, batch size, etc.), the cost function, the model architecture itself, and more. It’s a tedious task to tweak all of those things and find the right combination.
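
To make that concrete, here is a rough sketch of where those knobs appear in a typical PyTorch training loop for a hybrid model (model and dataset here are placeholders for illustration, not from your code):

import torch

# `model` is assumed to be a hybrid torch.nn.Module containing a qml.qnn.TorchLayer,
# and `dataset` a torch Dataset -- both are placeholders for illustration.
opt = torch.optim.Adam(model.parameters(), lr=1e-3)                          # optimizer + learning rate
loss_fn = torch.nn.CrossEntropyLoss()                                        # cost function
loader = torch.utils.data.DataLoader(dataset, batch_size=32, shuffle=True)   # batch size

for xs, ys in loader:
    opt.zero_grad()
    loss = loss_fn(model(xs), ys)   # forward pass through the hybrid model
    loss.backward()
    opt.step()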

I also think it’s not a good idea to assume that quantum / hybrid models should train better (better accuracy, for instance) than a classical model. The question is much more nuanced than that. Maria Schuld gave a nice talk on this topic recently; you should check it out!

Hello @isaacdevlugt,

I appreciate the resources. The issue appears to be with implementing the def forward() step from “Creating non-sequential models” in other notebooks. For instance, in the demo’s make_moons example the first qlayer is applied with x_1 = self.qlayer_1(x_1).

A hybrid model I am working on returns parameters in the run summary as “qlayer_1 | TorchLayer | 12” using self.qlayer_1 = qml.qnn.TorchLayer(qnode, weight_shapes)

I would need an analogue of “x_1” in the hybrid notebook’s def forward(self, pixel_values) in order to fully incorporate the quantum circuit. Thank you.
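
For reference, the non-sequential pattern in that demo looks roughly like this (a sketch only; qnode_1, qnode_2, and weight_shapes are assumed to be defined as in the demo):

class HybridModel(torch.nn.Module):
    def __init__(self):
        super().__init__()
        self.clayer_1 = torch.nn.Linear(2, 4)
        self.qlayer_1 = qml.qnn.TorchLayer(qnode_1, weight_shapes)
        self.qlayer_2 = qml.qnn.TorchLayer(qnode_2, weight_shapes)
        self.clayer_2 = torch.nn.Linear(4, 2)
        self.softmax = torch.nn.Softmax(dim=1)

    def forward(self, x):
        x = self.clayer_1(x)
        # split the classical features and send each half through its own quantum layer
        x_1, x_2 = torch.split(x, 2, dim=1)
        x_1 = self.qlayer_1(x_1)
        x_2 = self.qlayer_2(x_2)
        x = torch.cat([x_1, x_2], dim=1)
        x = self.clayer_2(x)
        return self.softmax(x)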

References:

https://github.com/NielsRogge/Transformers-Tutorials/blob/master/VisionTransformer/Fine_tuning_the_Vision_Transformer_on_CIFAR_10_with_PyTorch_Lightning.ipynb

Can you reply back with some code that summarizes the issue you’re facing? That would help me understand the problem better :slight_smile:

Hello @isaacdevlugt,

class ViTLightningModule(pl.LightningModule):
    def __init__(self, num_labels=10):
        super(ViTLightningModule, self).__init__()
        self.vit = ViTForImageClassification.from_pretrained('google/vit-base-patch16-224-in21k',
                                                              num_labels=10,
                                                              id2label=id2label,
                                                              label2id=label2id)
        self.qlayer_1 = qml.qnn.TorchLayer(qnode, weight_shapes)

    def forward(self, pixel_values):
        outputs = self.vit(pixel_values=pixel_values)
        outputs = self.qlayer_1()
        return outputs.logits

Some weights of ViTForImageClassification were not initialized from the model checkpoint at google/vit-base-patch16-224-in21k and are newly initialized: ['classifier.weight', 'classifier.bias']
You should probably TRAIN this model on a down-stream task to be able to use it for predictions and inference.
INFO:pytorch_lightning.utilities.rank_zero:GPU available: True (cuda), used: True
INFO:pytorch_lightning.utilities.rank_zero:TPU available: False, using: 0 TPU cores
INFO:pytorch_lightning.utilities.rank_zero:IPU available: False, using: 0 IPUs
INFO:pytorch_lightning.utilities.rank_zero:HPU available: False, using: 0 HPUs
INFO:pytorch_lightning.accelerators.cuda:LOCAL_RANK: 0 - CUDA_VISIBLE_DEVICES: [0]
INFO:pytorch_lightning.callbacks.model_summary:
  | Name     | Type                      | Params
--------------------------------------------------
0 | vit      | ViTForImageClassification | 85.8 M
1 | qlayer_1 | TorchLayer                | 16
--------------------------------------------------
85.8 M    Trainable params
0         Non-trainable params
85.8 M    Total params
343.225   Total estimated model params size (MB)
Sanity Checking DataLoader 0: 0%
0/2 [00:00<?, ?it/s]

TypeError                                 Traceback (most recent call last)
in <cell line: 15>()
     13 model = ViTLightningModule()
     14 trainer = Trainer(accelerator='gpu', max_epochs=5) #, callbacks=[EarlyStopping(monitor='validation_loss')])
---> 15 trainer.fit(model)

15 frames
/usr/local/lib/python3.10/dist-packages/torch/nn/modules/module.py in _call_impl(self, *args, **kwargs)
   1499             or _global_backward_pre_hooks or _global_backward_hooks
   1500             or _global_forward_hooks or _global_forward_pre_hooks):
-> 1501             return forward_call(*args, **kwargs)
   1502         # Do not call functions when jit is used
   1503         full_backward_hooks, non_full_backward_hooks = [], []

TypeError: TorchLayer.forward() missing 1 required positional argument: 'inputs'

Thanks @kevinkawchak! I still don’t quite understand the issue you’re facing, and I can’t run your code since there are dependencies missing. However, your forward pass does look problematic:

def forward(self, pixel_values):
    outputs = self.vit(pixel_values=pixel_values)
    outputs = self.qlayer_1()
    return outputs.logits

qlayer_1 probably needs some inputs. You are also overwriting outputs after you calculate self.vit(pixel_values=pixel_values).

If this isn’t the issue, having a complete code example that I can run would help :slight_smile:

I’m implementing code from the “Turning quantum nodes into Torch Layers” PennyLane demo.

If you’re just trying to inject some “quantumness” into the code provided in that GitHub repository, then there’s definitely something wrong with your forward pass:

def forward(self, pixel_values):
    outputs = self.vit(pixel_values=pixel_values)
    outputs = self.qlayer_1()
    return outputs.logits

qlayer_1 probably needs inputs, and you are also overwriting outputs after you calculate self.vit(pixel_values=pixel_values). If qlayer_1 comes after vit, then you’d need to do something like this:

def forward(self, pixel_values):
    outputs = self.vit(pixel_values=pixel_values)
    outputs = self.qlayer_1(outputs)
    return outputs.logits

The output of vit would then be the input to qlayer_1. Also, I’m not sure that outputs will have a logits attribute, so that might have to be changed as well. Let me know if that helps!
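
One possible wiring, assuming the Hugging Face classification output does expose .logits and with the slice down to n_qubits features included purely for illustration, would be:

def forward(self, pixel_values):
    outputs = self.vit(pixel_values=pixel_values)
    features = outputs.logits            # classification scores from the ViT head
    # the TorchLayer's qnode embeds n_qubits features, so whatever is passed to
    # qlayer_1 must have n_qubits entries in its last dimension; this slice is illustrative only,
    # and the model then returns the qnode's expectation values rather than 10-class logits
    return self.qlayer_1(features[:, :n_qubits])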

Thank you,

Is this what you are suggesting?

class ViTLightningModule(pl.LightningModule):
    def __init__(inputs, self, num_labels=10):
        super(ViTLightningModule, self).__init__()
        self.vit = ViTForImageClassification.from_pretrained('google/vit-base-patch16-224-in21k',
                                                              num_labels=10,
                                                              id2label=id2label,
                                                              label2id=label2id)
        self.qlayer_1 = qml.qnn.TorchLayer(inputs, qnode, weight_shapes)

Reference:
https://github.com/NielsRogge/Transformers-Tutorials/blob/master/VisionTransformer/Fine_tuning_the_Vision_Transformer_on_CIFAR_10_with_PyTorch_Lightning.ipynb

Not quite — I think the key might be to just modify your forward function to this :slight_smile:

def forward(self, pixel_values):
    outputs = self.vit(pixel_values=pixel_values)
    outputs = self.qlayer_1(outputs)
    return outputs.logits

Hello, for the original Torch Layers demo running in Colab, it only works when a single qubit is used with qml.Hadamard OR qml.RX. Below are the code and errors for 2 qubits.

    qml.Hadamard(wires=range(n_qubits))
    qml.AngleEmbedding(inputs, wires=range(n_qubits))
    qml.BasicEntanglerLayers(weights, wires=range(n_qubits))

OR

    qml.RX(inputs, wires=range(n_qubits))
    qml.AngleEmbedding(inputs, wires=range(n_qubits))
    qml.BasicEntanglerLayers(weights, wires=range(n_qubits))

ValueError                                Traceback (most recent call last)
in <cell line: 13>()
     18         opt.zero_grad()
     19 
---> 20         loss_evaluated = loss(model(xs), ys)
     21         loss_evaluated.backward()
     22 

11 frames
/usr/local/lib/python3.10/dist-packages/pennylane/operation.py in __init__(self, wires, id, *params)
   1045 
   1046         elif len(self._wires) != self.num_wires:
-> 1047             raise ValueError(
   1048                 f"{self.name}: wrong number of wires. "
   1049                 f"{len(self._wires)} wires given, {self.num_wires} expected."

ValueError: Hadamard: wrong number of wires. 2 wires given, 1 expected.

OR


ValueError                                Traceback (most recent call last)
in <cell line: 13>()
     18         opt.zero_grad()
     19 
---> 20         loss_evaluated = loss(model(xs), ys)
     21         loss_evaluated.backward()
     22 

12 frames
/usr/local/lib/python3.10/dist-packages/pennylane/operation.py in __init__(self, wires, id, *params)
   1045 
   1046         elif len(self._wires) != self.num_wires:
-> 1047             raise ValueError(
   1048                 f"{self.name}: wrong number of wires. "
   1049                 f"{len(self._wires)} wires given, {self.num_wires} expected."

ValueError: RX: wrong number of wires. 2 wires given, 1 expected.

Hey @kevinkawchak,

qml.Hadamard and qml.RX only accept one wire :slight_smile:. Applying the same gate to multiple wires can be done with qml.broadcast (qml.broadcast — PennyLane 0.33.0 documentation) or with a good old-fashioned for loop!
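
For example, a plain for-loop version of the demo's two-qubit circuit might look like this (a sketch following the demo's qnode structure):

n_qubits = 2
dev = qml.device("default.qubit", wires=n_qubits)

@qml.qnode(dev)
def qnode(inputs, weights):
    # apply a Hadamard to each wire, one at a time
    for wire in range(n_qubits):
        qml.Hadamard(wires=wire)
    qml.AngleEmbedding(inputs, wires=range(n_qubits))
    qml.BasicEntanglerLayers(weights, wires=range(n_qubits))
    return [qml.expval(qml.PauliZ(wires=i)) for i in range(n_qubits)]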

Hello, the qml.Hadamard broadcast works, but qml.RY does not work for either ‘Broadcasting single gates’ or ‘Broadcasting templates’ in the Torch Layers demo running in Colab.

n_qubits = 4
dev = qml.device("default.qubit", wires=n_qubits)

def mytemplate(weights, wires):
    qml.RY(weights, wires=range(n_qubits))

@qml.qnode(dev)
def qnode(inputs, weights):
    broadcast(unitary=mytemplate, pattern="single", wires=[0,1,2,3], parameters=weights)
    return [qml.expval(qml.PauliZ(wires=i)) for i in range(n_qubits)]

Error:

<ipython-input-224-ab4485a0c93b>:1: UserWarning: To copy construct from a tensor, it is recommended to use sourceTensor.clone().detach() or sourceTensor.clone().detach().requires_grad_(True), rather than torch.tensor(sourceTensor).
  X = torch.tensor(X, requires_grad=True).float()
---------------------------------------------------------------------------
ValueError                                Traceback (most recent call last)
<ipython-input-224-ab4485a0c93b> in <cell line: 12>()
     16         opt.zero_grad()
     17 
---> 18         loss_evaluated = loss_func(model(xs), ys)
     19         loss_evaluated.backward()
     20 

11 frames
/usr/local/lib/python3.10/dist-packages/pennylane/templates/broadcast.py in _preprocess(parameters, pattern, wires)
    130         num_params = PATTERN_TO_NUM_PARAMS[pattern](_wires)
    131         if shape[0] != num_params:
--> 132             raise ValueError(
    133                 f"Parameters must contain entries for {num_params} unitaries; got {shape[0]} entries"
    134             )

ValueError: Parameters must contain entries for 4 unitaries; got 6 entries

Hi @kevinkawchak ,

I see a couple of issues.

  1. RY only acts on a single wire at a time, so qml.RY(weights, wires=range(n_qubits)) should actually be qml.RY(weights, wires=wire), where wire is a single wire index.
  2. You don’t really need to create a custom template for RY. You can just use it as your unitary! Custom templates are most useful when you want to apply more than one gate in a single broadcast pattern.
  3. Your qnode doesn’t seem to be using the inputs argument.

Given these three points, I think what you want to do is the following:

n_qubits = 4
dev = qml.device("default.qubit", wires=n_qubits)

@qml.qnode(dev)
def qnode(weights):
    qml.broadcast(unitary=qml.RY, pattern="single", wires=[0,1,2,3], parameters=weights)
    return [qml.expval(qml.PauliZ(wires=i)) for i in range(n_qubits)]

weights = [1,2,3,4];

qnode(weights)

This code works for me so hopefully it works for you too!

Thank you, is there a way to make the weights trainable?

Yep! weights will be differentiable if it’s created with PennyLane’s wrapped NumPy:

import pennylane as qml
from pennylane import numpy as np  # PennyLane's wrapped NumPy, so `weights` is trainable

n_qubits = 4
dev = qml.device("default.qubit", wires=n_qubits)

@qml.qnode(dev)
def qnode(weights):
    qml.broadcast(unitary=qml.RY, pattern="single", wires=[0,1,2,3], parameters=weights)
    return [qml.expval(qml.PauliZ(wires=i)) for i in range(n_qubits)]

def cost(weights):
    outs = qnode(weights)
    return np.sum(outs)


weights = np.array([1,2,3,4]);

opt = qml.GradientDescentOptimizer(0.1)

opt.step_and_cost(cost, weights)
(tensor([1., 2., 3., 4.], requires_grad=True), -1.5194806481430605)

Hello, I can’t get either of these methods to work inside the Torch Layers demo. The original “unitaries/entries” error went away by using the code provided and setting n_layers = n_qubits, but I now receive this error:

<ipython-input-109-d039678a60e4>:1: UserWarning: To copy construct from a tensor, it is recommended to use sourceTensor.clone().detach() or sourceTensor.clone().detach().requires_grad_(True), rather than torch.tensor(sourceTensor).
  X = torch.tensor(X, requires_grad=True).float()
---------------------------------------------------------------------------
TypeError                                 Traceback (most recent call last)
<ipython-input-109-d039678a60e4> in <cell line: 13>()
     18         opt.zero_grad()
     19 
---> 20         loss_evaluated = loss(model(xs), ys)
     21         loss_evaluated.backward()
     22 

10 frames
/usr/local/lib/python3.10/dist-packages/pennylane/templates/broadcast.py in broadcast(unitary, wires, pattern, parameters, kwargs)
    565     else:
    566         for i in range(len(wire_sequence)):
--> 567             unitary(*parameters[i], wires=wire_sequence[i], **kwargs)

TypeError: RY.__init__() got multiple values for argument 'wires'

Hello, how many RY embedding layers and RY trainable layers does the following correspond to in the Torch Layers demo?

n_qubits = 10
dev = qml.device("default.qubit", wires=n_qubits)

@qml.qnode(dev)
def qnode(inputs, weights):
    qml.AngleEmbedding(inputs, wires=range(n_qubits), rotation='Y')
    qml.RandomLayers(weights, wires=range(n_qubits), ratio_imprim=0, rotations=[qml.RY], seed=42)
    return [qml.expval(qml.PauliZ(wires=i)) for i in range(n_qubits)]
weights = np.array([1,2,3,4,5,6,7,8,9,10]);

n_layers = 1
weight_shapes = {"weights": (n_layers, n_qubits)}