Different interfaces, different performances

There is an example presented by PennyLane for qml.qnn.TorchLayer, which I copy below. If one modifies only the decorator line from @qml.qnode(dev) to @qml.qnode(dev, interface='torch'), the convergence behaviour changes drastically:

for the case @qml.qnode(dev):

Average loss over epoch 10: 0.1589
Average loss over epoch 20: 0.1331
Average loss over epoch 30: 0.1321

and for the case @qml.qnode(dev, interface='torch'):

Average loss over epoch 100: 0.1709
Average loss over epoch 200: 0.1593
Average loss over epoch 300: 0.1538
Average loss over epoch 400: 0.1505
Average loss over epoch 500: 0.1482
Average loss over epoch 600: 0.1462
Average loss over epoch 700: 0.1448
Average loss over epoch 800: 0.1434
Average loss over epoch 900: 0.1426
Average loss over epoch 1000: 0.1415

I am wondering what the reason behind this is and, more generally, how do we know which interface is the most suitable for a given problem?

Code:

import numpy as np
import pennylane as qml
import torch
import sklearn.datasets

n_qubits = 2
dev = qml.device("default.qubit", wires=n_qubits)

@qml.qnode(dev, interface='torch') # the default was @qml.qnode(dev)
def qnode(inputs, weights):
    qml.templates.AngleEmbedding(inputs, wires=range(n_qubits))
    qml.templates.StronglyEntanglingLayers(weights, wires=range(n_qubits))
    return qml.expval(qml.PauliZ(0)), qml.expval(qml.PauliZ(1))

weight_shapes = {"weights": (3, n_qubits, 3)}

qlayer = qml.qnn.TorchLayer(qnode, weight_shapes)

clayer1 = torch.nn.Linear(2, 2)
clayer2 = torch.nn.Linear(2, 2)
softmax = torch.nn.Softmax(dim=1)
model = torch.nn.Sequential(clayer1, qlayer, clayer2, softmax)

samples = 100
x, y = sklearn.datasets.make_moons(samples)
y_hot = np.zeros((samples, 2))
y_hot[np.arange(samples), y] = 1

X = torch.tensor(x).float()
Y = torch.tensor(y_hot).float()

opt = torch.optim.SGD(model.parameters(), lr=0.5)
loss = torch.nn.L1Loss()

epochs = 1000
batch_size = 5
batches = samples // batch_size

data_loader = torch.utils.data.DataLoader(list(zip(X, Y)), batch_size=batch_size,
                                          shuffle=True, drop_last=True)

for epoch in range(epochs):

    running_loss = 0

    for x, y in data_loader:
        opt.zero_grad()
        loss_evaluated = loss(model(x), y)
        loss_evaluated.backward()
        opt.step()
        running_loss += loss_evaluated.item()  # use .item() so the autograd graph is not kept alive

    avg_loss = running_loss / batches
    if (epoch + 1) % 100 == 0:
        print("Average loss over epoch {}: {:.4f}".format(epoch + 1, avg_loss))

Hi @mamadpierre!

Currently, when using the qml.qnn module, it is best to always create a bare QNode with no interface, and use this to initialize the TorchLayer. That is, simply @qml.qnode(dev).

The behaviour you found (where training differs if interface= is passed to the decorator) is a known bug; this has been fixed in master and will be making it into the next PennyLane release!
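
In other words, the recommended pattern looks like this (a minimal sketch reusing the QNode from your example; TorchLayer should handle the interface conversion internally):

import pennylane as qml

n_qubits = 2
dev = qml.device("default.qubit", wires=n_qubits)

# Bare QNode: no interface argument. TorchLayer converts it to the
# torch interface when the layer is constructed.
@qml.qnode(dev)
def qnode(inputs, weights):
    qml.templates.AngleEmbedding(inputs, wires=range(n_qubits))
    qml.templates.StronglyEntanglingLayers(weights, wires=range(n_qubits))
    return qml.expval(qml.PauliZ(0)), qml.expval(qml.PauliZ(1))

qlayer = qml.qnn.TorchLayer(qnode, {"weights": (3, n_qubits, 3)})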


Hi @josh,
Thanks for your prompt response, that clears it up. Since you answered my question thoroughly and I have already shared the code above, let me ask another one.
How do I use the above code with device = "cuda"?
Please note that:

  1. I add device = "cuda" and modify the two lines below:

x, y = x.to(device), y.to(device)

model = torch.nn.Sequential(clayer1, qlayer, clayer2, softmax).to(device)

But I receive an error.

  2. If I omit the qlayer from nn.Sequential, the classical code works fine with CUDA.

Error:

    line 68, in <module>
    loss_evaluated.backward()
    line 198, in backward
    torch.autograd.backward(self, gradient, retain_graph, create_graph)
    line 100, in backward
    allow_unreachable=True)  # allow_unreachable flag
RuntimeError: Expected object of device type cuda but got device type cpu for argument 2 'mat2' in call to _th_mm

Thanks for reporting this @mamadpierre! I will look into this and get back to you; it looks like the current iteration of TorchLayer does not support placing models on the GPU.

For now, I recommend using the CPU with the TorchLayer.


Thanks for the time and consideration @josh. My guess is that the weight and input structures are not placed on the CUDA device by default, which perhaps could be modified?
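
One quick way to test that guess is plain PyTorch, independent of the TorchLayer internals: print the device of every registered parameter of the hybrid model defined above.

# If the quantum weights report "cpu" while the classical layers
# report "cuda:0", the model is indeed split across devices.
for name, param in model.named_parameters():
    print(name, param.device)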

The error was also reported here at almost the same time.

Hi @mamadpierre!

Thanks for checking this out! We had a quick look and managed to get the following to work without error:

import numpy as np
import pennylane as qml
import torch
import sklearn.datasets

n_qubits = 2
dev = qml.device("default.qubit", wires=n_qubits)

@qml.qnode(dev)  # bare QNode, as recommended above
def qnode(inputs, weights):
    qml.templates.AngleEmbedding(inputs, wires=range(n_qubits))
    qml.templates.StronglyEntanglingLayers(weights, wires=range(n_qubits))
    return qml.expval(qml.PauliZ(0)), qml.expval(qml.PauliZ(1))

weight_shapes = {"weights": (3, n_qubits, 3)}

qlayer = qml.qnn.TorchLayer(qnode, weight_shapes)

clayer1 = torch.nn.Linear(2, 2)
clayer2 = torch.nn.Linear(2, 2)
softmax = torch.nn.Softmax(dim=1)

device = "cuda"
model = torch.nn.Sequential(clayer1, qlayer, clayer2, softmax).to(device)
# model = torch.nn.Sequential(clayer1, clayer2, softmax).to(device)

samples = 100
x, y = sklearn.datasets.make_moons(samples)
y_hot = np.zeros((samples, 2))
y_hot[np.arange(samples), y] = 1

X = torch.tensor(x).float()
Y = torch.tensor(y_hot).float()
X, Y = X.to(device), Y.to(device)

model(X[:10])

That being said, TorchLayer is fairly new, so it's possible that it doesn't work so well on GPU. For now, it might be best to stick with the CPU version as @josh mentioned.
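
In the meantime, an explicit CPU pin keeps everything consistent (plain PyTorch, reusing the model, X and Y from the snippet above):

# Keep the hybrid model and the data on the CPU until TorchLayer
# supports GPU placement.
device = "cpu"
model = model.to(device)
X, Y = X.to(device), Y.to(device)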

Hi @Tom_Bromley, thanks for the time and the quick reply. I copy-pasted the provided code and received the following error:

Traceback (most recent call last):
  File "torchQuantumLayer.py", line 70, in <module>
    loss_evaluated = loss(model(x), y)
  File "anaconda3/envs/MnistPytorch/lib/python3.7/site-packages/torch/nn/modules/module.py", line 550, in __call__
    result = self.forward(*input, **kwargs)
  File "anaconda3/envs/MnistPytorch/lib/python3.7/site-packages/torch/nn/modules/container.py", line 100, in forward
    input = module(input)
  File "anaconda3/envs/MnistPytorch/lib/python3.7/site-packages/torch/nn/modules/module.py", line 550, in __call__
    result = self.forward(*input, **kwargs)
  File "anaconda3/envs/MnistPytorch/lib/python3.7/site-packages/torch/nn/modules/linear.py", line 87, in forward
    return F.linear(input, self.weight, self.bias)
  File "anaconda3/envs/MnistPytorch/lib/python3.7/site-packages/torch/nn/functional.py", line 1610, in linear
    ret = torch.addmm(bias, input, weight.t())
RuntimeError: Expected object of device type cuda but got device type cpu for argument #2 'mat1' in call to _th_addmm

Versions:

Python 3.7.7 (default, May  7 2020, 21:25:33)
[GCC 7.3.0] :: Anaconda, Inc. on linux
>>> import pennylane
>>> import torch
>>> print(torch.__version__)
1.5.1
>>> print(pennylane.__version__)
0.10.0

Hi @mamadpierre,

Thanks for also trying that code! Weird that it behaves differently for both of us. I ended up using an earlier version of Torch, as the GPU drivers on my device needed updating; perhaps it was that :thinking: For now we'll just have to recommend running on CPU, but this has definitely put GPU support on our radar, thanks!
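
If anyone else wants to compare setups, the relevant environment details can be gathered like this (standard attributes of both libraries):

import pennylane
import torch

# Version and CUDA details that tend to matter when GPU behaviour
# differs between machines.
print("PennyLane:", pennylane.__version__)
print("Torch:", torch.__version__)
print("CUDA available:", torch.cuda.is_available())
print("CUDA build:", torch.version.cuda)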
