Calculate gradients using qml.AmplitudeEmbedding

Hi!

I want to use qml.AmplitudeEmbedding. However, I need to calculate gradients for gradient descent. According to the qml.AmplitudeEmbedding page in the PennyLane 0.31.0 documentation, this is not possible. Is this correct? Is there a way to do it anyway? When will it be possible?

Hi @eisenmsi, thanks for your question! :slight_smile:
What is the exact problem you’re trying to solve?

If you’re trying to calculate a gradient with respect to the feature vector used within a qml.AmplitudeEmbedding call, unfortunately that won’t be possible. The processing needed to turn that feature vector into the prepared state is non-trivial and, in general, not differentiable.

However, depending on what exactly you’re trying to do, maybe you can find a workaround. For example, maybe there’s some other approach you can use to prepare your desired state?

But if you’re trying to calculate a gradient with respect to some other parameter in the circuit (not the feature vector here), that should not be a problem. :slight_smile:
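For instance, here is a minimal sketch of that situation (the device, gate choice, and numbers are just illustrative): the features are marked as non-trainable, and the gradient is taken only with respect to a rotation angle that comes after the embedding.

import pennylane as qml
from pennylane import numpy as np

dev = qml.device('default.qubit', wires=2)

@qml.qnode(dev)
def circuit(features, angle):
    # The feature vector is embedded but will not be differentiated
    qml.AmplitudeEmbedding(features, wires=range(2), normalize=True)
    # A regular gate parameter, which is differentiable
    qml.RY(angle, wires=0)
    return qml.expval(qml.PauliZ(0))

features = np.array([1., 2., 3., 4.], requires_grad=False)
angle = np.array(0.3, requires_grad=True)

# The gradient is taken only with respect to `angle`
print(qml.grad(circuit)(features, angle))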


Hi @eisenmsi ,

Great question! Notice that AmplitudeEmbedding is designed to embed non-trainable features into your circuit. From what I understand, you want to create an optimization routine where you train over a set of parameters. Below is an example of how you can do this. You will notice that I use AmplitudeEmbedding to embed my features and BasicEntanglerLayers to add my trainable parameters.

# Import your favourite libraries
import pennylane as qml
from pennylane import numpy as np

# Define the number of wires and the device
n_wires = 2
dev = qml.device('lightning.qubit', wires=n_wires)

# Create a qnode
@qml.qnode(dev)
def circuit(features, parameters):
    # Embed your features
    qml.AmplitudeEmbedding(features, wires=range(n_wires), normalize=True)
    # Create an ansatz where you will add your trainable parameters
    qml.BasicEntanglerLayers(weights=parameters, wires=range(n_wires))
    # Return a measurement
    return qml.expval(qml.PauliZ(0))

# Define your features
features = np.array([1., 2., 3., 4.], requires_grad=False)
# Define your initial parameters and set them as trainable
parameters = np.array([[1., np.pi], [0., 1.]], requires_grad=True)

# Draw your circuit
qml.draw_mpl(circuit)(features, parameters);

# Create a cost function
def cost(features, parameters):
    return circuit(features, parameters) ** 2

# Define an optimizer
opt = qml.GradientDescentOptimizer(stepsize=0.1)

# Iterate, updating your parameters after every iteration
for it in range(15):
    # opt.step returns the updated arguments; only the trainable ones change
    features, parameters = opt.step(cost, features, parameters)
    print('Cost: ', cost(features, parameters))

# Notice how the cost converges

Please let me know if this is helpful or if you were looking for something different!


Thanks for the answer! Yes, I would like to calculate the gradient with respect to the features. Is there another way to efficiently encode the features into a quantum circuit that remains differentiable?

Thank you! This is very helpful! So I can use qml.AmplitudeEmbedding in the context of gradient descent as long as I don’t have the features as trainable parameters!?

That’s right! :smile: There is a trick that allows you to do this, but it could be a bit more complicated. qml.MottonenStatePreparation will give you an equivalent circuit that generates the state you want with rotations and CNOTs. You would have to build that circuit and use the parameters of the rotations you have obtained with the template.
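For illustration, here is a rough sketch of that idea (the way the angles are extracted and re-applied below is just one possible approach, not a built-in utility, and the feature values are only examples):

import pennylane as qml
from pennylane import numpy as np

n_wires = 2
dev = qml.device('default.qubit', wires=n_wires)

# Normalize the features into a valid quantum state
features = np.array([1., 2., 3., 4.])
state = features / np.linalg.norm(features)

# Get the rotations and CNOTs that would prepare this state
ops = qml.MottonenStatePreparation(state, wires=range(n_wires)).decomposition()

# Collect the rotation angles; these become the trainable parameters
angles = np.array([op.parameters[0] for op in ops if op.num_params == 1], requires_grad=True)

@qml.qnode(dev)
def circuit(angles):
    i = 0
    for op in ops:
        if op.num_params == 1:
            # Re-create each rotation with a trainable angle
            type(op)(angles[i], wires=op.wires)
            i += 1
        else:
            # Parameter-free gates (the CNOTs) are applied as they are
            qml.apply(op)
    return qml.expval(qml.PauliZ(0))

# Gradient with respect to the rotation angles (not the original features)
print(qml.grad(circuit)(angles))

Keep in mind that the gradient you get this way is with respect to the rotation angles of that particular circuit, not with respect to the original feature vector itself.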

On the other hand, if you are using a quantum computer, note that the gradient is not a unique property of an initial state. You can find several circuits that generate the same state but have a different gradient or even a different number of parameters.
