Using AmplitudeEmbedding Template

Hi,

I am trying to use amplitude embedding to encode 4 features as follows:

import tensorflow as tf

import pennylane as qml

from sklearn.datasets import load_iris

from sklearn.model_selection import train_test_split

# import some data to play with

iris = load_iris()

X = iris.data[:, :]  # we take all four features

Y = iris.target

trainX, testX, trainy, testy = train_test_split(X, Y, test_size=0.3, random_state=42)

trainy = tf.one_hot(trainy, depth=3)

testy = tf.one_hot(testy, depth=3)

n_qubits = 2

layers = 1

data_dimension = 3

dev = qml.device("default.qubit", wires=n_qubits)

@qml.qnode(dev)

def qnode(inputs, weights):

    qml.templates.AmplitudeEmbedding(features=inputs, wires=range(n_qubits),normalize=True)

    qml.templates.StronglyEntanglingLayers(weights, wires=range(n_qubits))

    return [qml.expval(qml.PauliZ(i)) for i in range(n_qubits)]

weight_shapes = {"weights": (layers, n_qubits, 3)}  # StronglyEntanglingLayers expects weights of shape (n_layers, n_wires, 3)

model = tf.keras.models.Sequential()

model.add(tf.keras.layers.Dense(n_qubits,activation='relu',input_dim=4))

model.add(qml.qnn.KerasLayer(qnode, weight_shapes, output_dim=n_qubits))

model.add(tf.keras.layers.Dense(data_dimension, activation='softmax'))

opt = tf.keras.optimizers.Adam(learning_rate=0.01)

model.compile(loss='categorical_crossentropy', optimizer=opt,metrics=["accuracy"])

history = model.fit(trainX, trainy, validation_data=(testX, testy), epochs=30, batch_size=5)

I get an error during model.fit(): ValueError: 'features' must be of shape (4,); got (2,). Use the 'pad' argument for automated padding.

I have four features and the number of wires is 2, so why is padding required?

If I set pad=0., I get: AttributeError: 'float' object has no attribute 'val'

Any suggestions?

Hi @Hemant_Gahankari,

Thanks so much for your question! :slight_smile:

When defining a quantum function (a function that is passed to a QNode), non-differentiable parameters (such as inputs in this case) require a default value to be defined. This is how the differentiable parameters of a quantum function are tracked when creating a QNode.

In this specific case, the definition of the qnode function could be changed to def qnode(weights, inputs=None). This way weights is marked as differentiable and inputs as non-differentiable.
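As a rough sketch of that change, reusing the device and templates from your code above (only the signature differs):

@qml.qnode(dev)
def qnode(weights, inputs=None):  # a default value marks inputs as non-differentiable
    qml.templates.AmplitudeEmbedding(features=inputs, wires=range(n_qubits), normalize=True)
    qml.templates.StronglyEntanglingLayers(weights, wires=range(n_qubits))
    return [qml.expval(qml.PauliZ(i)) for i in range(n_qubits)]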

Further to this, there is an upcoming QNode that uses the new QuantumTape class. This is currently in an experimental phase, and compatibility with templates (such as AmplitudeEmbedding) is underway! :slight_smile:

Hope this helps!


Hi,

I made the changes; the code now looks like this:

import tensorflow as tf

import pennylane as qml

from sklearn.datasets import load_iris

from sklearn.model_selection import train_test_split

# import some data to play with

iris = load_iris()

X = iris.data[:, :]  # we take all four features

Y = iris.target

trainX, testX, trainy, testy = train_test_split(X, Y, test_size=0.3, random_state=42)

trainy = tf.one_hot(trainy, depth=3)

testy = tf.one_hot(testy, depth=3)

n_qubits = 2

layers = 1

data_dimension = 3

dev = qml.device("default.qubit", wires=n_qubits)

@qml.qnode(dev)

def qnode(weights, inputs=None):

    qml.templates.AmplitudeEmbedding(features=inputs, wires=range(n_qubits),normalize=True,pad=0.)

    qml.templates.StronglyEntanglingLayers(weights, wires=range(n_qubits))

    return [qml.expval(qml.PauliZ(i)) for i in range(n_qubits)]

weight_shapes = {"weights": (layers,n_qubits,3)}

model = tf.keras.models.Sequential()

model.add(tf.keras.layers.Dense(n_qubits,activation='relu',input_dim=4))

model.add(qml.qnn.KerasLayer(qnode, weight_shapes, output_dim=n_qubits))

model.add(tf.keras.layers.Dense(data_dimension, activation='softmax'))

opt = tf.keras.optimizers.Adam(learning_rate=0.01)

model.compile(loss='categorical_crossentropy', optimizer=opt, metrics=["accuracy"])

history = model.fit(trainX, trainy, validation_data=(testX, testy), epochs=30, batch_size=5)

from matplotlib import pyplot

# plot loss during training

pyplot.subplot(211)

pyplot.title('Loss')

pyplot.plot(history.history['loss'], label='train')

pyplot.plot(history.history['val_loss'], label='test')

pyplot.legend()

# plot accuracy during training

pyplot.subplot(212)

pyplot.title('Accuracy')

pyplot.plot(history.history['accuracy'], label='train')

pyplot.plot(history.history['val_accuracy'], label='test')

pyplot.legend()

pyplot.show()

I am not able to understand:

  1. If I do not give pad=0., I get an error: ValueError: 'features' must be of shape (4,); got (2,). My question is: I have 4 features, so 2 qubits should be enough to embed them. Why is pad required?

  2. If I give pad=0., model.fit() starts, but I get tensorflow:Gradients do not exist for variables ['dense_10/kernel:0', 'dense_10/bias:0'] when minimizing the loss, and I do not get good results for loss and accuracy. (This works well with angle embedding.)

Hi,

I fixed the padding issue; it was caused by the Dense layer having 2 units instead of 4. AmplitudeEmbedding on 2 wires expects 2^2 = 4 amplitudes, so the layer feeding the QNode must output 4 values. It worked with the following code:

model = tf.keras.models.Sequential()

model.add(tf.keras.layers.Dense(4,activation='relu',input_dim=4))

model.add(qml.qnn.KerasLayer(qnode, weight_shapes, output_dim=n_qubits))

model.add(tf.keras.layers.Dense(data_dimension, activation='softmax'))

Thank you folks, PennyLane rocks :slight_smile: I am very happy with the swift responses from you all. I finally have two end-to-end classification demos working with angle and amplitude embedding.


I would be very happy to share the complete code if you folks want to consider putting it up on your demos page.

I think these will give a very good start to many like me in getting up to speed with an end-to-end classification example using TF-Keras and minimal data processing.

Hi @Hemant_Gahankari,

That’s really great to hear, happy that we could help! :slight_smile:

For sure! You can submit code that later appears on the demos page by following this link: How to submit a demo.


Hi, I am trying to add a QNode to a Keras actor-critic model. It works with angle embedding, but with amplitude embedding I am facing some difficulty. Hoping to get some help from the community. Thanks!

Code below

import gym
import numpy as np
import tensorflow as tf
from tensorflow import keras
from tensorflow.keras import layers
import pennylane as qml
import keras_metrics

# Configuration parameters for the whole setup

seed = 42
gamma = 0.99 # Discount factor for past rewards
max_steps_per_episode = 10000
env = gym.make("CartPole-v0")  # Create the environment
env.seed(seed)
eps = np.finfo(np.float32).eps.item() # Smallest number such that 1.0 + eps != 1.0

n_qubits = 2
dev = qml.device("default.qubit", wires=n_qubits)

@qml.qnode(dev)
def qnode(weights, inputs=None):
    qml.templates.AmplitudeEmbedding(features=inputs, wires=range(n_qubits), normalize=True)
    qml.templates.StronglyEntanglingLayers(weights, wires=range(n_qubits))
    return [qml.expval(qml.PauliZ(i)) for i in range(n_qubits)]

n_layers = 1
weight_shapes = {"weights": (n_layers, n_qubits, 3)}
qlayer = qml.qnn.KerasLayer(qnode, weight_shapes, output_dim=n_qubits)

import tensorflow as tf
from tensorflow import keras
from tensorflow.keras import layers

num_inputs = 4
num_actions = 2
num_hidden = 4

inputs = layers.Input(shape=(num_inputs,))
input1 = tf.keras.layers.Dense(4, activation='relu', input_dim=4, trainable=False)(inputs)
common = layers.Dense(num_hidden, activation="relu")(qlayer(input1))
action = layers.Dense(num_actions, activation="softmax")(common)
critic = layers.Dense(1)(common)

model = keras.Model(inputs=inputs, outputs=[action, critic])

optimizer = keras.optimizers.Adam(learning_rate=0.01)
huber_loss = keras.losses.Huber()
action_probs_history = []
critic_value_history = []
rewards_history = []
running_reward = 0
episode_count = 0

while True:  # Run until solved
    state = env.reset()
    episode_reward = 0
    with tf.GradientTape() as tape:
        for timestep in range(1, max_steps_per_episode):
            # env.render(); Adding this line would show the attempts
            # of the agent in a pop up window.

            state = tf.convert_to_tensor(state)
            state = tf.expand_dims(state, 0)

            # Predict action probabilities and estimated future rewards
            # from environment state
            action_probs, critic_value = model(state)
            critic_value_history.append(critic_value[0, 0])

            # Sample action from action probability distribution
            action = np.random.choice(num_actions, p=np.squeeze(action_probs))
            action_probs_history.append(tf.math.log(action_probs[0, action]))

            # Apply the sampled action in our environment
            state, reward, done, _ = env.step(action)
            rewards_history.append(reward)
            episode_reward += reward

            if done:
                break

        # Update running reward to check condition for solving
        running_reward = 0.05 * episode_reward + (1 - 0.05) * running_reward

        # Calculate expected value from rewards
        # - At each timestep what was the total reward received after that timestep
        # - Rewards in the past are discounted by multiplying them with gamma
        # - These are the labels for our critic
        returns = []
        discounted_sum = 0
        for r in rewards_history[::-1]:
            discounted_sum = r + gamma * discounted_sum
            returns.insert(0, discounted_sum)

        # Normalize
        returns = np.array(returns)
        returns = (returns - np.mean(returns)) / (np.std(returns) + eps)
        returns = returns.tolist()

        # Calculating loss values to update our network
        history = zip(action_probs_history, critic_value_history, returns)
        actor_losses = []
        critic_losses = []
        for log_prob, value, ret in history:
            # At this point in history, the critic estimated that we would get a
            # total reward = `value` in the future. We took an action with log probability
            # of `log_prob` and ended up receiving a total reward = `ret`.
            # The actor must be updated so that it predicts an action that leads to
            # high rewards (compared to critic's estimate) with high probability.
            diff = ret - value
            actor_losses.append(-log_prob * diff)  # actor loss

            # The critic must be updated so that it predicts a better estimate of
            # the future rewards.
            critic_losses.append(
                huber_loss(tf.expand_dims(value, 0), tf.expand_dims(ret, 0))
            )

        # Backpropagation
        loss_value = sum(actor_losses) + sum(critic_losses)
        grads = tape.gradient(loss_value, model.trainable_variables)
        optimizer.apply_gradients(zip(grads, model.trainable_variables))

        # Clear the loss and reward history
        action_probs_history.clear()
        critic_value_history.clear()
        rewards_history.clear()

    # Log details
    episode_count += 1
    if episode_count % 10 == 0:
        template = "running reward: {:.2f} at episode {}"
        print(template.format(running_reward, episode_count))

    if running_reward > 195:  # Condition to consider the task solved
        print("Solved at episode {}!".format(episode_count))
        break

Error: Tensor conversion requested dtype complex128 for Tensor with dtype float32: <tf.Tensor: shape=(4,), dtype=float32, numpy=array([0. , 0.42925665, 0.8368186 , 0.33981368], dtype=float32)>

The classical version: Actor Critic Method

Thanks in Advance :slight_smile:

Actually, I get the same error when trying to reproduce your code:

ValueError: Tensor conversion requested dtype complex128 for Tensor with dtype float32: <tf.Tensor: shape=(4,), dtype=float32, numpy=array([0. , 0.24570765, 0. , 0.96934396], dtype=float32)>

Hi @bengeof, welcome to the forum! :slight_smile:

Two small requests that will better let us help you:

  • Your issue seems to be unrelated to the original subject/content of this post. Would you be able to create a standalone post for the issue you are experiencing (the type error when using KerasLayer)? This will also allow other users to see/find the question and any potential answers more easily.

  • I tried to look into your code and was eventually able to reproduce the error, but there is a lot going on there (and a lot of formatting errors). This means I had to make some guesses about missing values and indentation, which I'd like to verify before digging into debugging. Would you be able to i) strip down your example to a minimal (non-)working version, and ii) put everything in a single code block (some of it is now in text, some in code)? See the sketch below for the kind of snippet that would help.
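For illustration, a stripped-down version could look roughly like this (an untested sketch pieced together from your code above, with the training loop removed and an example input made up):

import numpy as np
import tensorflow as tf
import pennylane as qml

n_qubits = 2
dev = qml.device("default.qubit", wires=n_qubits)

@qml.qnode(dev)
def qnode(weights, inputs=None):
    qml.templates.AmplitudeEmbedding(features=inputs, wires=range(n_qubits), normalize=True)
    qml.templates.StronglyEntanglingLayers(weights, wires=range(n_qubits))
    return [qml.expval(qml.PauliZ(i)) for i in range(n_qubits)]

weight_shapes = {"weights": (1, n_qubits, 3)}
qlayer = qml.qnn.KerasLayer(qnode, weight_shapes, output_dim=n_qubits)

# A single float32 input of length 4, like the output of the Dense layer in the full model
x = tf.constant(np.random.rand(1, 4), dtype=tf.float32)
print(qlayer(x))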

Thanks :slight_smile: