Using the AmplitudeEmbedding Template

Hi,

I am trying to use amplitude embedding to encode 4 features, as follows:

```python
import pennylane as qml
import tensorflow as tf
from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split

# import some data to play with
iris = load_iris()
X = iris.data[:, :]  # take all four features
Y = iris.target

trainX, testX, trainy, testy = train_test_split(X, Y, test_size=0.3, random_state=42)
trainy = tf.one_hot(trainy, depth=3)
testy = tf.one_hot(testy, depth=3)

n_qubits = 2
layers = 1
data_dimension = 3  # number of classes

dev = qml.device("default.qubit", wires=n_qubits)

@qml.qnode(dev)
def qnode(inputs, weights):
    qml.templates.AmplitudeEmbedding(features=inputs, wires=range(n_qubits), normalize=True)
    qml.templates.StronglyEntanglingLayers(weights, wires=range(n_qubits))
    return [qml.expval(qml.PauliZ(i)) for i in range(n_qubits)]

weight_shapes = {"weights": (layers, n_qubits, 3)}

model = tf.keras.models.Sequential()
model.add(tf.keras.layers.Dense(n_qubits, activation='relu', input_dim=4))
model.add(qml.qnn.KerasLayer(qnode, weight_shapes, output_dim=n_qubits))
model.add(tf.keras.layers.Dense(data_dimension, activation='softmax'))

opt = tf.keras.optimizers.Adam(learning_rate=0.01)
model.compile(loss='categorical_crossentropy', optimizer=opt, metrics=["accuracy"])

history = model.fit(trainX, trainy, validation_data=(testX, testy), epochs=30, batch_size=5)
```

During `model.fit()` I get: `ValueError: 'features' must be of shape (4,); got (2,). Use the 'pad' argument for automated padding.`

I have four features and two wires, so why is padding required?

If I set `pad=0.`, I instead get: `AttributeError: 'float' object has no attribute 'val'`

Any suggestions?

Hi @Hemant_Gahankari,

Thanks so much for your question! :slight_smile:

When defining a quantum function (a function that is passed to a QNode), non-differentiable parameters (such as `inputs` in this case) require a default value. This is how the differentiable parameters of a quantum function are tracked when creating a QNode.

In this specific case, the definition of the qnode function could be changed to def qnode(weights, inputs=None). This way weights is marked as differentiable and inputs as non-differentiable.
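As a rough, stdlib-only illustration of this convention (a toy stand-in, not PennyLane's actual implementation): arguments without a default value are treated as differentiable, while keyword arguments with a default are treated as non-differentiable data.

```python
import inspect

def qnode(weights, inputs=None):
    """Toy stand-in for the quantum function above."""
    return weights, inputs

sig = inspect.signature(qnode)

# Arguments without a default are treated as differentiable...
trainable = [name for name, p in sig.parameters.items()
             if p.default is inspect.Parameter.empty]

# ...while keyword arguments with a default are treated as data.
non_trainable = [name for name, p in sig.parameters.items()
                 if p.default is not inspect.Parameter.empty]

print(trainable)      # ['weights']
print(non_trainable)  # ['inputs']
```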

Further to this, there is an upcoming QNode that uses the new QuantumTape class. It is currently in an experimental phase, and compatibility with templates (such as AmplitudeEmbedding) is underway! :slight_smile:

Hope this helps!


Hi,

I made the changes; the code now looks like this:

```python
import pennylane as qml
import tensorflow as tf
from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split

# import some data to play with
iris = load_iris()
X = iris.data[:, :]  # take all four features
Y = iris.target

trainX, testX, trainy, testy = train_test_split(X, Y, test_size=0.3, random_state=42)
trainy = tf.one_hot(trainy, depth=3)
testy = tf.one_hot(testy, depth=3)

n_qubits = 2
layers = 1
data_dimension = 3  # number of classes

dev = qml.device("default.qubit", wires=n_qubits)

@qml.qnode(dev)
def qnode(weights, inputs=None):
    qml.templates.AmplitudeEmbedding(features=inputs, wires=range(n_qubits), normalize=True, pad=0.)
    qml.templates.StronglyEntanglingLayers(weights, wires=range(n_qubits))
    return [qml.expval(qml.PauliZ(i)) for i in range(n_qubits)]

weight_shapes = {"weights": (layers, n_qubits, 3)}

model = tf.keras.models.Sequential()
model.add(tf.keras.layers.Dense(n_qubits, activation='relu', input_dim=4))
model.add(qml.qnn.KerasLayer(qnode, weight_shapes, output_dim=n_qubits))
model.add(tf.keras.layers.Dense(data_dimension, activation='softmax'))

opt = tf.keras.optimizers.Adam(learning_rate=0.01)
model.compile(loss='categorical_crossentropy', optimizer=opt, metrics=["accuracy"])

history = model.fit(trainX, trainy, validation_data=(testX, testy), epochs=30, batch_size=5)

from matplotlib import pyplot

# plot loss during training
pyplot.subplot(211)
pyplot.title('Loss')
pyplot.plot(history.history['loss'], label='train')
pyplot.plot(history.history['val_loss'], label='test')
pyplot.legend()

# plot accuracy during training
pyplot.subplot(212)
pyplot.title('Accuracy')
pyplot.plot(history.history['accuracy'], label='train')
pyplot.plot(history.history['val_accuracy'], label='test')
pyplot.legend()
pyplot.show()
```

I am not able to understand:

  1. If I do not pass `pad=0.`, I get `ValueError: 'features' must be of shape (4,); got (2,).` My question is: I have 4 features, so 2 qubits should be enough to embed them. Why is padding required?

  2. If I pass `pad=0.`, `model.fit()` starts, but I get the warning `tensorflow: Gradients do not exist for variables ['dense_10/kernel:0', 'dense_10/bias:0'] when minimizing the loss`, and I do not get good loss and accuracy. (This works well with angle embedding.)
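A minimal NumPy sketch of what is going on with the shapes (assuming the model above, where the first Dense layer has `n_qubits = 2` units): AmplitudeEmbedding stores one feature per amplitude of the quantum state, and `n` qubits have `2**n` amplitudes, so 2 wires expect exactly 4 values. The Dense layer in front of the quantum layer outputs only 2 numbers, which is where the `(4,); got (2,)` error comes from; `pad=0.` appends zeros up to the required length.

```python
import numpy as np

n_qubits = 2
# n qubits give 2**n amplitudes, so 2 wires expect 4 features.
n_amplitudes = 2 ** n_qubits
print(n_amplitudes)  # 4

# normalize=True rescales the features into a valid (unit-norm) state.
features = np.array([0.3, -1.2, 0.5, 0.9])
state = features / np.linalg.norm(features)
print(np.sum(np.abs(state) ** 2))  # 1.0

# A Dense layer with 2 units feeds only 2 numbers into the QNode;
# pad=0. would append zeros to reach the required length of 4.
short = np.array([0.6, 0.8])
padded = np.concatenate([short, np.zeros(n_amplitudes - len(short))])
print(padded.shape)  # (4,)
```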

Hi,

I fixed the padding issue: it was due to the first dense layer having 2 units instead of 4. It works with the following code:

```python
model = tf.keras.models.Sequential()
model.add(tf.keras.layers.Dense(4, activation='relu', input_dim=4))
model.add(qml.qnn.KerasLayer(qnode, weight_shapes, output_dim=n_qubits))
model.add(tf.keras.layers.Dense(data_dimension, activation='softmax'))
```

Thank you, folks, PennyLane rocks :slight_smile: I am very happy with the swift responses from you all. I finally have two end-to-end classification demos working, with angle and amplitude embedding.


I would be very happy to share the complete code if you folks want to consider putting it up on your demos page.

I think these demos would give many people like me a good start on an end-to-end classification example with TF-Keras, with minimal data processing.

Hi @Hemant_Gahankari,

That’s really great to hear, happy that we could help! :slight_smile:

For sure! You can submit code to appear on the demos page by following this link on How to submit a demo.
