Hi all!
I want to take the gradient of a circuit that takes trainable parameters `params` and sampled inputs `x`. The inputs consist of batches of 2 numbers, say x_1 and x_2. In my application the batch size is 10,000, i.e. x \in \mathbb{R}^{10000 \times 2}; in the minimal example below I use a batch of 10, so x \in \mathbb{R}^{10 \times 2}. The trainable parameters are \mathrm{params} \in \mathbb{R}^3. I want the gradient circuit to be vectorized, so that it directly gives me the gradient for all the x I put into the network. The optimizer should then optimize `params` only; the x are fixed. The goal is to minimize \partial_{x_1} \mathrm{circuit}(\mathrm{params}, x), computed for all inputs in the batch at once; I guess the optimizer then has to minimize the mean of these outputs.
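Written out, with batch samples x^{(1)}, \dots, x^{(N)} (here N = 10), the objective I have in mind is

\mathcal{L}(\mathrm{params}) = \frac{1}{N} \sum_{i=1}^{N} \frac{\partial\, \mathrm{circuit}(\mathrm{params}, x^{(i)})}{\partial x_1^{(i)}}.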
How do I do this with input batches `x`? The code I used is:
```python
import pennylane as qml
from pennylane import numpy as np

dev = qml.device("default.qubit", wires=3)

x = np.random.random([10, 2], requires_grad=True)
params = np.random.random([3], requires_grad=True)

# @qml.batch_params(all_operations=True)  # <-- did not help
@qml.qnode(dev, max_diff=2)
def circuit(params, x):
    # the batched inputs broadcast over the data-encoding rotations
    qml.RX(x[:, 0], wires=0)
    qml.RY(x[:, 1], wires=1)
    qml.RZ(x[:, 0], wires=2)
    qml.broadcast(qml.CNOT, wires=[0, 1, 2], pattern="ring")
    qml.RX(params[0], wires=0)
    qml.RY(params[1], wires=1)
    qml.RZ(params[2], wires=2)
    qml.broadcast(qml.CNOT, wires=[0, 1, 2], pattern="ring")
    return qml.expval(qml.PauliZ(0))

def loss(params, x):
    grad_circ = qml.grad(circuit)
    u_x = grad_circ(params, x)[0]  # <-- this is where the error is raised
    return u_x

opt = qml.AdamOptimizer()
for i in range(4):
    params = opt.step(loss, params, x)
    print(f"Step {i}: cost = {loss(params, x):.2f}")
```
I also tried this with `np.mean()` inserted at several places, and with the `jax` interface and the `optax` optimizer, to no avail. The error I get (without the stack trace) is:
```
TypeError: Grad only applies to real scalar-output functions. Try jacobian, elementwise_grad or holomorphic_grad.
```
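Following that suggestion, here is a sketch of what I would try with `qml.jacobian` in place of the `loss` above; the diagonal extraction and the mean are my guesses at reducing the batch, and I have not verified that this is the intended pattern:

```python
# Sketch (my guess, not verified): use qml.jacobian instead of qml.grad,
# since the batched circuit output has shape (10,) rather than a scalar.
def loss(params, x):
    # Jacobian of the (10,)-shaped output w.r.t. x (argnum=1 selects x);
    # for a broadcasted batch this should have shape (10, 10, 2).
    jac = qml.jacobian(circuit, argnum=1)(params, x)
    # d circuit_i / d x_1 of sample i sits on the diagonal of the two batch axes.
    u_x1 = np.diagonal(jac[:, :, 0])
    return np.mean(u_x1)
```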
With `jax` I get a similar error, but it mentions the output shape `(10,)`.
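That attempt looked roughly like this (a sketch from memory, so details may be off; I believe the QNode dispatches to the `jax` interface when called with `jax` arrays, otherwise it needs `interface="jax"`):

```python
import jax
import jax.numpy as jnp

params_j = jnp.array(np.random.random(3))
x_j = jnp.array(np.random.random((10, 2)))

# Fails for the same reason: jax.grad also requires a scalar output,
# but the batched circuit returns shape (10,).
u_x = jax.grad(circuit, argnums=1)(params_j, x_j)
```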
`qml.about()` gives:
Name: PennyLane
Version: 0.30.0
Summary: PennyLane is a Python quantum machine learning library by Xanadu Inc.
Home-page: https://github.com/XanaduAI/pennylane
Author:
Author-email:
License: Apache License 2.0
Location: /home/mielke/QPINNs/qpinns/lib/python3.11/site-packages
Requires: appdirs, autograd, autoray, cachetools, networkx, numpy, pennylane-lightning, requests, rustworkx, scipy, semantic-version, toml
Required-by: PennyLane-Lightning
Platform info: Linux-4.19.0-24-cloud-amd64-x86_64-with-glibc2.28
Python version: 3.11.3
Numpy version: 1.23.5
Scipy version: 1.10.1
Installed devices:
- default.gaussian (PennyLane-0.30.0)
- default.mixed (PennyLane-0.30.0)
- default.qubit (PennyLane-0.30.0)
- default.qubit.autograd (PennyLane-0.30.0)
- default.qubit.jax (PennyLane-0.30.0)
- default.qubit.tf (PennyLane-0.30.0)
- default.qubit.torch (PennyLane-0.30.0)
- default.qutrit (PennyLane-0.30.0)
- null.qubit (PennyLane-0.30.0)
- lightning.qubit (PennyLane-Lightning-0.30.0)
Am I doing something very wrong, or is this just not possible with PennyLane?
As an add-on question: I need to do this with the second derivative with respect to x_1, too. Is that possible?
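In other words, something like this nested call (again just a sketch; I set `max_diff=2` on the QNode with this in mind):

```python
# Sketch: second derivative w.r.t. x via nested differentiation.
# The inner jacobian differentiates w.r.t. x (argnum=1), and the outer
# one differentiates that result w.r.t. x again.
d2_circ = qml.jacobian(qml.jacobian(circuit, argnum=1), argnum=1)
```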
Thanks!