Hello everyone!
I’m trying out the following model with diff_method = "parameter-shift" specified. While going through some of the discussions, I found one explaining how you can print out the differentiation method being used under the hood when you don’t specify one. Following that, I tried circuit.best_method_str(dev, circuit.interface) just to see what it gives (I had assumed it would return parameter-shift, since I had already specified the diff_method), but I got backprop instead.

So, does circuit.best_method_str(dev, circuit.interface) just return the ‘best’ differentiation method (which happens to be backprop in this case) rather than the method actually being used under the hood? Or does it mean that backprop was used instead of parameter shift? If it is the former, is there another way to figure out which differentiation method is actually being used? If it is the latter, is there a way to force diff_method to be parameter-shift only?
Here’s the model I was running, for context:
import pennylane as qml
from pennylane import numpy as np

# Setting up qubit
dev = qml.device('default.qubit', wires=2)

# Setting up quantum circuit
@qml.qnode(dev)
def circuit(params, diff_method="parameter-shift"):
    qml.RX(params[0], wires=0)
    qml.RY(params[1], wires=1)
    qml.CNOT(wires=[0, 1])
    return qml.probs(wires=(0, 1))

# Cost function
def cost_fn(params):
    probs = circuit(params)
    return np.mean(np.abs(probs - target_probs))

# Setting up optimizer to Gradient Descent
opt = qml.GradientDescentOptimizer(0.3)

# Target probability distribution
target_probs = [0.2, 0.4, 0.2, 0.2]

# Parameters for the quantum circuit
params = np.array([np.pi/4, np.pi/4], requires_grad=True)

# No. of steps
steps = 500

# Running the model
for _ in range(steps):
    params, l = opt.step_and_cost(cost_fn, params)

print(circuit.best_method_str(dev, circuit.interface))
The output of print(circuit.best_method_str(dev, circuit.interface)) is:

backprop
Here’s the output of qml.about():
Name: PennyLane
Version: 0.37.0
Summary: PennyLane is a cross-platform Python library for quantum computing, quantum machine learning, and quantum chemistry. Train a quantum computer the same way as a neural network.
Home-page: https://github.com/PennyLaneAI/pennylane
Author:
Author-email:
License: Apache License 2.0
Location: /usr/local/lib/python3.10/dist-packages
Requires: appdirs, autograd, autoray, cachetools, networkx, numpy, packaging, pennylane-lightning, requests, rustworkx, scipy, semantic-version, toml, typing-extensions
Required-by: PennyLane_Lightning
Platform info: Linux-6.1.85+-x86_64-with-glibc2.35
Python version: 3.10.12
Numpy version: 1.26.4
Scipy version: 1.13.1
Installed devices:
- default.clifford (PennyLane-0.37.0)
- default.gaussian (PennyLane-0.37.0)
- default.mixed (PennyLane-0.37.0)
- default.qubit (PennyLane-0.37.0)
- default.qubit.autograd (PennyLane-0.37.0)
- default.qubit.jax (PennyLane-0.37.0)
- default.qubit.legacy (PennyLane-0.37.0)
- default.qubit.tf (PennyLane-0.37.0)
- default.qubit.torch (PennyLane-0.37.0)
- default.qutrit (PennyLane-0.37.0)
- default.qutrit.mixed (PennyLane-0.37.0)
- default.tensor (PennyLane-0.37.0)
- null.qubit (PennyLane-0.37.0)
- lightning.qubit (PennyLane_Lightning-0.37.0)
I’d appreciate any insight! Thank you so much for reading and the help!