Turning quantum nodes into Torch Layers

Hey @kevinkawchak,

You can use the compute_decomposition method that each operator has:

import pennylane as qml
from pennylane import numpy as np

n_qubits = 4

# decompose an AngleEmbedding into its constituent RY rotations
inputs = np.random.uniform(0, np.pi, size=(n_qubits,))
print(qml.AngleEmbedding.compute_decomposition(inputs, wires=range(n_qubits), rotation=qml.RY))

# decompose RandomLayers built from RY rotations only (no entangling gates)
n_layers = 10
weights = np.random.uniform(0, np.pi, size=(n_layers, n_qubits))
print(qml.RandomLayers.compute_decomposition(weights, wires=range(n_qubits), ratio_imprim=0, imprimitive=qml.CNOT, rotations=[qml.RY], seed=42))
[RY(tensor(1.42790738, requires_grad=True), wires=[0]), RY(tensor(2.82650401, requires_grad=True), wires=[1]), RY(tensor(1.51604305, requires_grad=True), wires=[2]), RY(tensor(2.66255654, requires_grad=True), wires=[3])]
[RY(tensor(1.1522639, requires_grad=True), wires=[2]), RY(tensor(2.30919675, requires_grad=True), wires=[1]), RY(tensor(2.99291522, requires_grad=True), wires=[0]), RY(tensor(1.87356346, requires_grad=True), wires=[0]), RY(tensor(1.49890151, requires_grad=True), wires=[2]), RY(tensor(0.42741971, requires_grad=True), wires=[3]), RY(tensor(1.76770254, requires_grad=True), wires=[2]), RY(tensor(3.06895537, requires_grad=True), wires=[1]), RY(tensor(0.6608513, requires_grad=True), wires=[1]), RY(tensor(3.00064304, requires_grad=True), wires=[3]), RY(tensor(2.6624893, requires_grad=True), wires=[0]), RY(tensor(2.98035507, requires_grad=True), wires=[2]), RY(tensor(2.35717625, requires_grad=True), wires=[1]), RY(tensor(1.89289895, requires_grad=True), wires=[2]), RY(tensor(1.71322966, requires_grad=True), wires=[0]), RY(tensor(2.89751162, requires_grad=True), wires=[3]), RY(tensor(0.75892838, requires_grad=True), wires=[3]), RY(tensor(1.13771258, requires_grad=True), wires=[0]), RY(tensor(1.7231123, requires_grad=True), wires=[2]), RY(tensor(2.94940615, requires_grad=True), wires=[0]), RY(tensor(0.252351, requires_grad=True), wires=[1]), RY(tensor(0.16210348, requires_grad=True), wires=[3]), RY(tensor(0.78967891, requires_grad=True), wires=[0]), RY(tensor(0.86227181, requires_grad=True), wires=[1]), RY(tensor(2.77279583, requires_grad=True), wires=[2]), RY(tensor(0.64170049, requires_grad=True), wires=[1]), RY(tensor(0.4653104, requires_grad=True), wires=[3]), RY(tensor(0.73046264, requires_grad=True), wires=[1]), RY(tensor(2.32315717, requires_grad=True), wires=[0]), RY(tensor(0.86651552, requires_grad=True), wires=[1]), RY(tensor(1.60561273, requires_grad=True), wires=[3]), RY(tensor(2.40816571, requires_grad=True), wires=[1]), RY(tensor(1.53504934, requires_grad=True), wires=[2]), RY(tensor(2.62760047, requires_grad=True), wires=[0]), RY(tensor(2.28099916, requires_grad=True), wires=[3]), RY(tensor(2.67712636, requires_grad=True), wires=[3]), RY(tensor(2.15453193, requires_grad=True), wires=[1]), RY(tensor(1.13060742, requires_grad=True), wires=[3]), RY(tensor(1.84518167, requires_grad=True), wires=[0]), RY(tensor(2.49053654, requires_grad=True), wires=[0])]

From there you can just count the number of RY operators :slight_smile:. There are more elegant ways of doing this, but this is short and to the point :+1:
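For instance, here's a minimal sketch of the counting step (assuming the same weights and n_qubits as above; filtering with isinstance is just one way to do it):

ops = qml.RandomLayers.compute_decomposition(weights, wires=range(n_qubits), ratio_imprim=0, imprimitive=qml.CNOT, rotations=[qml.RY], seed=42)
n_ry = sum(isinstance(op, qml.RY) for op in ops)
print(n_ry)  # 40 for the RandomLayers decomposition above (n_layers = 10, n_qubits = 4)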

Hello, how can I insert torch layers into this demo?
Transformers-Tutorials/VisionTransformer/Fine_tuning_the_Vision_Transformer_on_CIFAR_10_with_PyTorch_Lightning.ipynb at master · NielsRogge/Transformers-Tutorials · GitHub

class ViTLightningModule(pl.LightningModule):
    def __init__(self, num_labels=10):
        super(ViTLightningModule, self).__init__()
        self.vit = ViTForImageClassification.from_pretrained('google/vit-base-patch16-224-in21k',
                                                              num_labels=10,
                                                              id2label=id2label,
                                                              label2id=label2id)
        self.qlayer_1 = qml.qnn.TorchLayer(qnode, weight_shapes)

    def forward(self, pixel_values):
        outputs = self.vit(pixel_values=pixel_values)
        outputs = self.qlayer_1(outputs)
        return outputs.logits
Some weights of ViTForImageClassification were not initialized from the model checkpoint at google/vit-base-patch16-224-in21k and are newly initialized: ['classifier.weight', 'classifier.bias']
You should probably TRAIN this model on a down-stream task to be able to use it for predictions and inference.
INFO:pytorch_lightning.utilities.rank_zero:GPU available: False, used: False
INFO:pytorch_lightning.utilities.rank_zero:TPU available: False, using: 0 TPU cores
INFO:pytorch_lightning.utilities.rank_zero:IPU available: False, using: 0 IPUs
INFO:pytorch_lightning.utilities.rank_zero:HPU available: False, using: 0 HPUs
INFO:pytorch_lightning.callbacks.model_summary:
  | Name     | Type                      | Params
-------------------------------------------------------
0 | vit      | ViTForImageClassification | 85.8 M
1 | qlayer_1 | TorchLayer                | 6     
-------------------------------------------------------
85.8 M    Trainable params
0         Non-trainable params
85.8 M    Total params
343.225   Total estimated model params size (MB)
Sanity Checking DataLoader 0: 0% 0/2 [00:00<?, ?it/s]
---------------------------------------------------------------------------
AttributeError                            Traceback (most recent call last)
<ipython-input-78-67edf51677ef> in <cell line: 15>()
     13 model = ViTLightningModule()
     14 trainer = Trainer(callbacks=[EarlyStopping(monitor='validation_loss')])
---> 15 trainer.fit(model)

18 frames
/usr/local/lib/python3.10/dist-packages/pennylane/qnn/torch.py in forward(self, inputs)
    392             tensor: output data
    393         """
--> 394         has_batch_dim = len(inputs.shape) > 1
    395 
    396         # in case the input has more than one batch dimension

AttributeError: 'ImageClassifierOutput' object has no attribute 'shape'

Hey @kevinkawchak,

So long as you're not mixing Torch's GPU and PL-LightningGPU, it should be fine. This works:

import pennylane as qml
from pennylane import numpy as np
import torch

dev = qml.device("default.qubit", wires=2)

@qml.qnode(dev)
def circuit(inputs, weights):
    # embed 4 classical features into the amplitudes of 2 qubits
    qml.AmplitudeEmbedding(inputs, wires=range(2), normalize=True)
    qml.RX(weights, wires=0)
    return qml.expval(qml.PauliZ(0))

# one trainable weight for the RX rotation
weight_shapes = {"weights": 1}
torch_layer = qml.qnn.TorchLayer(circuit, weight_shapes)

# classical layer maps 2 input features to the 4 amplitudes the circuit expects
clayer = torch.nn.Linear(2, 4)
model = torch.nn.Sequential(clayer, torch_layer)

x = np.random.uniform(0, 1, size=(2,))
x = torch.tensor(x).float()
model(x)
# tensor(0.3270, grad_fn=<ToCopyBackward0>)
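
As for the ViT traceback above: the TorchLayer is being handed a Hugging Face ImageClassifierOutput object, which has no shape attribute, instead of a plain tensor. A rough sketch of one way the forward pass could be wired instead, assuming the same qnode, weight_shapes, id2label and label2id from the notebook, and that the qnode's embedding accepts a feature vector of length num_labels and returns something the rest of the Lightning module can treat as logits (an illustration, not a drop-in fix):

class ViTLightningModule(pl.LightningModule):
    def __init__(self, num_labels=10):
        super().__init__()
        self.vit = ViTForImageClassification.from_pretrained(
            'google/vit-base-patch16-224-in21k',
            num_labels=num_labels,
            id2label=id2label,
            label2id=label2id,
        )
        # qnode and weight_shapes as defined elsewhere in the notebook
        self.qlayer_1 = qml.qnn.TorchLayer(qnode, weight_shapes)

    def forward(self, pixel_values):
        outputs = self.vit(pixel_values=pixel_values)
        # pass the logits tensor to the quantum layer, not the output wrapper object
        return self.qlayer_1(outputs.logits)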