I plan to use PennyLane and PyTorch to construct a hybrid quantum-classical neural network (e.g., for binary classification). If the variational quantum circuit in the quantum module uses amplitude encoding, can I print the gradients of the loss function with respect to the quantum parameters? If so, could you provide a minimal code example? I want to investigate whether the quantum module suffers from the barren plateau problem.

Hi @cyx617 , welcome to the Forum!

Yes, you can indeed print the gradients.

I used the full code example at the bottom of the TorchLayer documentation (Usage details section).

After `opt.step()` I added:

```
for p in model.parameters():
    print('p.grad: ', p.grad)
```

This prints the gradients for the different parameters.
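
If you also want to see which parameter each gradient belongs to (for example, to pick out the quantum weights), you can use `named_parameters()` instead:

```
for name, p in model.named_parameters():
    print('name:', name, 'p.grad:', p.grad)
```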

I’m not 100% sure that the output is what you would expect (this part is handled by Torch), but I hope this gives you some pointers.

See the full code below!

```
import numpy as np
import pennylane as qml
import torch
import sklearn.datasets

n_qubits = 2
dev = qml.device("default.qubit", wires=n_qubits)

@qml.qnode(dev)
def qnode(inputs, weights):
    qml.templates.AngleEmbedding(inputs, wires=range(n_qubits))
    qml.templates.StronglyEntanglingLayers(weights, wires=range(n_qubits))
    return [qml.expval(qml.Z(0)), qml.expval(qml.Z(1))]

weight_shapes = {"weights": (3, n_qubits, 3)}
qlayer = qml.qnn.TorchLayer(qnode, weight_shapes)
clayer1 = torch.nn.Linear(2, 2)
clayer2 = torch.nn.Linear(2, 2)
softmax = torch.nn.Softmax(dim=1)
model = torch.nn.Sequential(clayer1, qlayer, clayer2, softmax)

samples = 100
x, y = sklearn.datasets.make_moons(samples)
y_hot = np.zeros((samples, 2))
y_hot[np.arange(samples), y] = 1

X = torch.tensor(x).float()
Y = torch.tensor(y_hot).float()

opt = torch.optim.SGD(model.parameters(), lr=0.5)
loss = torch.nn.L1Loss()

# Training
epochs = 8
batch_size = 5
batches = samples // batch_size
data_loader = torch.utils.data.DataLoader(
    list(zip(X, Y)), batch_size=batch_size, shuffle=True, drop_last=True
)

for epoch in range(epochs):
    running_loss = 0
    for x, y in data_loader:
        opt.zero_grad()
        loss_evaluated = loss(model(x), y)
        loss_evaluated.backward()
        opt.step()
        # Added this: print the gradient of every parameter after the step
        for p in model.parameters():
            print('p.grad: ', p.grad)
        running_loss += loss_evaluated
    avg_loss = running_loss / batches
    print("Average loss over epoch {}: {:.4f}".format(epoch + 1, avg_loss))
```
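
By the way, since your question mentions amplitude encoding: the example above uses `AngleEmbedding`, but you can print the gradients in exactly the same way with `qml.templates.AmplitudeEmbedding`. Here's a minimal sketch of the modified QNode; the `pad_with` and `normalize` arguments are just one reasonable choice (2 qubits hold 4 amplitudes, so the 2 classical features need padding), and support for differentiable batched inputs may depend on your PennyLane version.

```
@qml.qnode(dev)
def qnode_amp(inputs, weights):
    # Two qubits hold 2**2 = 4 amplitudes, so pad the 2 input features
    # with zeros and normalize the resulting state vector
    qml.templates.AmplitudeEmbedding(
        inputs, wires=range(n_qubits), pad_with=0.0, normalize=True
    )
    qml.templates.StronglyEntanglingLayers(weights, wires=range(n_qubits))
    return [qml.expval(qml.Z(0)), qml.expval(qml.Z(1))]

qlayer_amp = qml.qnn.TorchLayer(qnode_amp, weight_shapes)
```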

The code you sent me runs successfully and prints the gradients of the quantum parameters. However, after modifying the code from this tutorial in the same way, I found that the printed gradient values of the quantum parameters are **None**. What could be the reason for this?

I added the following code after `optimizer.zero_grad()` in `tutorial_quantum_transfer_learning.py`:

```
for name, p in model_hybrid.named_parameters():
    print('name:', name, 'p.grad: ', p.grad)
```

I think the biggest difference between the mini code example you sent me and the `tutorial_quantum_transfer_learning.py` code is that in `tutorial_quantum_transfer_learning.py`, `qml.qnn.TorchLayer()` is not used to construct the quantum layer.

Hi @cyx617 ,

I think the issue is *where* you’re adding the print statement. The code below is part of the “training and results” section of the transfer learning demo (with the print statement added).

When you run `optimizer.zero_grad()` you’re resetting the gradients, so it’s normal that you don’t get the expected output if you add the print statement there. Notice that below, within the `if` statement, you have `optimizer.step()`. That’s where the optimization step is actually taken, so that’s where you can add the print statement (`for p in model.parameters(): print('p.grad: ', p.grad)`).

```
# Iterate over data.
n_batches = dataset_sizes[phase] // batch_size
it = 0
for inputs, labels in dataloaders[phase]:
    since_batch = time.time()
    batch_size_ = len(inputs)
    inputs = inputs.to(device)
    labels = labels.to(device)
    optimizer.zero_grad()

    # Track/compute gradient and make an optimization step only when training
    with torch.set_grad_enabled(phase == "train"):
        outputs = model(inputs)
        _, preds = torch.max(outputs, 1)
        loss = criterion(outputs, labels)
        if phase == "train":
            loss.backward()
            optimizer.step()
            # Added this: print the gradients right after the step
            for p in model.parameters():
                print('p.grad: ', p.grad)

    # Print iteration results
```
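
Also, since you want to investigate barren plateaus: a common diagnostic is to estimate the variance of the gradient of a single circuit parameter over many random initializations, and check how it decays as you increase the number of qubits. Here's a minimal sketch of that idea (standalone, not part of the transfer learning demo; the circuit, sample count, and parameter choice are just placeholders):

```
import numpy as np
import pennylane as qml
import torch

n_qubits = 2
dev = qml.device("default.qubit", wires=n_qubits)

@qml.qnode(dev, interface="torch")
def circuit(weights):
    qml.templates.StronglyEntanglingLayers(weights, wires=range(n_qubits))
    return qml.expval(qml.Z(0))

# Shape of the weights for 3 layers on n_qubits wires
shape = qml.StronglyEntanglingLayers.shape(n_layers=3, n_wires=n_qubits)

grads = []
for _ in range(100):
    weights = torch.tensor(
        np.random.uniform(0, 2 * np.pi, size=shape), requires_grad=True
    )
    circuit(weights).backward()
    # Track the gradient of one fixed parameter across initializations
    grads.append(weights.grad[0, 0, 0].item())

print("Gradient variance:", np.var(grads))
```

If this variance shrinks exponentially as you scale up the number of qubits, that's the barren plateau signature.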

Let me know if this works for you!

Hi @CatalinaAlbornoz ,

Thank you very much. The modified code works well and finally prints out the gradient values!