Turning quantum nodes into Torch Layers

Hello @Tom_Bromley,

Do you have any suggestions for obtaining better accuracy with qml.qnn.TorchLayer models vs. the original models? The code has been implemented in another, classical notebook.

Thank you,


Hey @kevinkawchak!

What do you mean by “original models”? Are you trying to compare quantum / hybrid models to purely classical ones on the same task?

Yes, this is the case.

Thanks for clarifying! This is a question we get quite often, and unfortunately the answer isn’t very satisfying :sweat_smile:. There are many things that go into making a machine learning model work well for a specific task, including the choice of optimizer, the choice of hyperparameters — learning rate, step size, batch size, etc — the cost function, the model architecture itself, and many more. It’s a tedious task to tweak all of those things and find the right combination.
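Just to make that concrete with a toy example (everything below is a placeholder, nothing from your notebook): the exact same model can land at very different accuracies depending on those choices.

    import torch

    # placeholder model: a stand-in for any classical, quantum, or hybrid architecture
    model = torch.nn.Sequential(torch.nn.Linear(2, 4), torch.nn.ReLU(), torch.nn.Linear(4, 2))

    # two of many possible training setups; each one would need its own tuning before a fair comparison
    opt_a, loss_a = torch.optim.SGD(model.parameters(), lr=0.2), torch.nn.L1Loss()
    opt_b, loss_b = torch.optim.Adam(model.parameters(), lr=1e-3), torch.nn.CrossEntropyLoss()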

I also think it’s not good to assume that quantum / hybrid models should train better (better accuracy, for instance) than a classical model. The question is much more nuanced than that. Maria Schuld gave a nice talk on this topic recently, you should check it out!

Hello @isaacdevlugt,

I appreciate the resources. The issue appears to be with implementing the def forward() step from “Creating non-sequential models” in other notebooks. For instance, the demo’s make_moons model calls the first qlayer with: x_1 = self.qlayer_1(x_1)
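For context, the non-sequential model in that demo looks roughly like this (paraphrased from memory, so details may differ slightly; qnode and weight_shapes are defined earlier in the demo):

    class HybridModel(torch.nn.Module):
        def __init__(self):
            super().__init__()
            self.clayer_1 = torch.nn.Linear(2, 4)
            self.qlayer_1 = qml.qnn.TorchLayer(qnode, weight_shapes)
            self.qlayer_2 = qml.qnn.TorchLayer(qnode, weight_shapes)
            self.clayer_2 = torch.nn.Linear(4, 2)
            self.softmax = torch.nn.Softmax(dim=1)

        def forward(self, x):
            x = self.clayer_1(x)
            x_1, x_2 = torch.split(x, 2, dim=1)   # split the features between the two quantum circuits
            x_1 = self.qlayer_1(x_1)              # each qlayer is called with its own input tensor
            x_2 = self.qlayer_2(x_2)
            x = torch.cat([x_1, x_2], axis=1)
            x = self.clayer_2(x)
            return self.softmax(x)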

A hybrid model I am working on returns parameters in the run summary as “qlayer_1 | TorchLayer | 12” using self.qlayer_1 = qml.qnn.TorchLayer(qnode, weight_shapes)

I would need an analogous solution to x_1 in the hybrid notebook’s def forward(self, pixel_values) in order to fully incorporate the quantum circuit. Thank you.

References:

https://github.com/NielsRogge/Transformers-Tutorials/blob/master/VisionTransformer/Fine_tuning_the_Vision_Transformer_on_CIFAR_10_with_PyTorch_Lightning.ipynb

Can you reply back with some code that summarizes the issue you’re facing? That would help me understand the problem better :slight_smile:

Hello @isaacdevlugt,

class ViTLightningModule(pl.LightningModule):

    def __init__(self, num_labels=10):
        super(ViTLightningModule, self).__init__()
        self.vit = ViTForImageClassification.from_pretrained('google/vit-base-patch16-224-in21k',
                                                              num_labels=10,
                                                              id2label=id2label,
                                                              label2id=label2id)
        self.qlayer_1 = qml.qnn.TorchLayer(qnode, weight_shapes)

    def forward(self, pixel_values):
        outputs = self.vit(pixel_values=pixel_values)
        outputs = self.qlayer_1()
        return outputs.logits

Some weights of ViTForImageClassification were not initialized from the model checkpoint at google/vit-base-patch16-224-in21k and are newly initialized: ['classifier.weight', 'classifier.bias']
You should probably TRAIN this model on a down-stream task to be able to use it for predictions and inference.
INFO:pytorch_lightning.utilities.rank_zero:GPU available: True (cuda), used: True
INFO:pytorch_lightning.utilities.rank_zero:TPU available: False, using: 0 TPU cores
INFO:pytorch_lightning.utilities.rank_zero:IPU available: False, using: 0 IPUs
INFO:pytorch_lightning.utilities.rank_zero:HPU available: False, using: 0 HPUs
INFO:pytorch_lightning.accelerators.cuda:LOCAL_RANK: 0 - CUDA_VISIBLE_DEVICES: [0]
INFO:pytorch_lightning.callbacks.model_summary:
  | Name     | Type                      | Params
-------------------------------------------------
0 | vit      | ViTForImageClassification | 85.8 M
1 | qlayer_1 | TorchLayer                | 16

85.8 M Trainable params
0 Non-trainable params
85.8 M Total params
343.225 Total estimated model params size (MB)
Sanity Checking DataLoader 0: 0%
0/2 [00:00<?, ?it/s]

TypeError Traceback (most recent call last)
in <cell line: 15>()
13 model = ViTLightningModule()
14 trainer = Trainer(accelerator='gpu', max_epochs=5) #, callbacks=[EarlyStopping(monitor='validation_loss')])
---> 15 trainer.fit(model)

15 frames
/usr/local/lib/python3.10/dist-packages/torch/nn/modules/module.py in _call_impl(self, *args, **kwargs)
1499 or _global_backward_pre_hooks or _global_backward_hooks
1500 or _global_forward_hooks or _global_forward_pre_hooks):
-> 1501 return forward_call(*args, **kwargs)
1502 # Do not call functions when jit is used
1503 full_backward_hooks, non_full_backward_hooks = [], []

TypeError: TorchLayer.forward() missing 1 required positional argument: ‘inputs’

Thanks @kevinkawchak! I still don’t quite understand the issue you’re facing. I can’t run your code either, since there are dependencies missing. However, your forward pass does look problematic:

def forward(self, pixel_values):
    outputs = self.vit(pixel_values=pixel_values)
    outputs = self.qlayer_1()
    return outputs.logits

qlayer_1 probably needs some inputs. You are also overwriting outputs after you calculate self.vit(pixel_values=pixel_values).
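For reference, a TorchLayer is called like any other torch.nn.Module, i.e. with an input tensor. Here’s a minimal sketch, assuming a 2-qubit QNode similar to the one in the demo:

    import pennylane as qml
    import torch

    n_qubits = 2
    dev = qml.device("default.qubit", wires=n_qubits)

    @qml.qnode(dev)
    def qnode(inputs, weights):
        qml.AngleEmbedding(inputs, wires=range(n_qubits))
        qml.BasicEntanglerLayers(weights, wires=range(n_qubits))
        return [qml.expval(qml.PauliZ(wires=i)) for i in range(n_qubits)]

    weight_shapes = {"weights": (6, n_qubits)}   # 6 entangling layers, purely illustrative
    qlayer = qml.qnn.TorchLayer(qnode, weight_shapes)

    x = torch.rand(8, n_qubits)   # a batch of 8 samples, one feature per qubit
    out = qlayer(x)               # forward() receives its `inputs` argument, so no TypeError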

If this isn’t the issue, having a complete code example that I can run would help :slight_smile:

I am implementing code from Turning quantum nodes into Torch Layers | PennyLane Demos.

If you’re just trying to inject some “quantumness” into the code provided in that GitHub repository, then definitely there’s something wrong with your forward pass:

def forward(self, pixel_values):
    outputs = self.vit(pixel_values=pixel_values)
    outputs = self.qlayer_1()
    return outputs.logits

qlayer_1 probably needs inputs, and you are also overwriting outputs after you calculate self.vit(pixel_values=pixel_values). If qlayer_1 comes after vit, then you’d need to do something like this:

def forward(self, pixel_values):
    outputs = self.vit(pixel_values=pixel_values)
    outputs = self.qlayer_1(outputs)
    return outputs.logits

The output of vit would then be the input to qlayer_1. Also, I’m not sure that outputs will still have a logits attribute after going through qlayer_1, so that might have to be changed as well. Let me know if that helps!
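For completeness, here’s a rough sketch of what a full forward pass might end up looking like. The extra layers proj_in and clayer_out are hypothetical (you’d have to define them in __init__ yourself, and their sizes depend on your QNode), so treat this as a starting point rather than something that runs as-is:

    def forward(self, pixel_values):
        outputs = self.vit(pixel_values=pixel_values)
        x = outputs.logits            # logits from the ViT classification head
        x = self.proj_in(x)           # hypothetical, e.g. torch.nn.Linear(num_labels, n_qubits)
        x = self.qlayer_1(x)          # the TorchLayer now receives a tensor with one feature per qubit
        return self.clayer_out(x)     # hypothetical, e.g. torch.nn.Linear(n_qubits, num_labels)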


Thank you,

Is this what you are suggesting?

class ViTLightningModule(pl.LightningModule):
    def __init__(inputs, self, num_labels=10):
        super(ViTLightningModule, self).__init__()
        self.vit = ViTForImageClassification.from_pretrained('google/vit-base-patch16-224-in21k',
                                                              num_labels=10,
                                                              id2label=id2label,
                                                              label2id=label2id)
        self.qlayer_1 = qml.qnn.TorchLayer(inputs, qnode, weight_shapes)

Reference:
https://github.com/NielsRogge/Transformers-Tutorials/blob/master/VisionTransformer/Fine_tuning_the_Vision_Transformer_on_CIFAR_10_with_PyTorch_Lightning.ipynb

Not quite — I think the key might be to just modify your forward function to this :slight_smile:

def forward(self, pixel_values):
    outputs = self.vit(pixel_values=pixel_values)
    outputs = self.qlayer_1(outputs)
    return outputs.logits