TTN with QNode using a Keras layer

I am using qml.TTN with a Keras layer, but it throws an error: TypeError: QNode must include an argument with name inputs for inputting data. It might be due to the weights' shape.

def block(weights, wires):
    qml.CNOT(wires=[wires[0], wires[1]])
    qml.RY(weights[0], wires=wires[0])
    qml.RY(weights[1], wires=wires[1])

n_wires = 4
n_block_wires = 2
n_params_block = 2
n_blocks = qml.TTN.get_n_blocks(range(n_wires), n_block_wires)

dev = qml.device("default.qubit.tf", wires=4)

@qml.qnode(dev, interface="tf", diff_method="backprop")
def circuit(x, weights):
    qml.AngleEmbedding(x, wires=range(4))

    for w in weights:
        qml.TTN(range(n_wires), n_block_wires, block, n_params_block, w)

    return qml.expval(qml.PauliZ(3))

weights_shape = {"weights": (3, 2)}

input_m = tf.keras.layers.Input(shape=(4,))
keras_1 = qml.qnn.KerasLayer(circuit, weights_shape, output_dim=1, name="keras_1")(input_m)
output = tf.keras.layers.Dense(1, activation="softmax", name="dense_1")(keras_1)

# Model creation
model = tf.keras.Model(inputs=input_m, outputs=output)

Hi @Amandeep!

This is actually not about the weights' shape. What this error means is that the x argument in your circuit should actually be called inputs. If you change the following two lines in your code, it should run properly.

def circuit(inputs, weights):
  qml.AngleEmbedding(inputs, wires=range(4))

Please let me know if this works for you!
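
For context, qml.qnn.KerasLayer inspects the QNode's signature: the batched Keras data is always routed to the argument named exactly inputs, and every other argument must have an entry in the weight_shapes dictionary, where it becomes a trainable layer weight. A minimal sketch of this convention (illustrative only, using a hypothetical two-qubit QNode):

import pennylane as qml
import tensorflow as tf

dev = qml.device("default.qubit", wires=2)

@qml.qnode(dev, interface="tf")
def qnode(inputs, weights):  # "inputs" is mandatory; "weights" becomes trainable
    qml.AngleEmbedding(inputs, wires=range(2))
    qml.RY(weights[0], wires=0)
    qml.RY(weights[1], wires=1)
    return qml.expval(qml.PauliZ(0))

# Every argument other than "inputs" needs an entry in weight_shapes.
qlayer = qml.qnn.KerasLayer(qnode, weight_shapes={"weights": (2,)}, output_dim=1)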

@CatalinaAlbornoz

Thank you for your prompt response. It worked fine. But when I ran the above code after making the suggested changes, it now throws a shape error:

class BinaryTruePositives(tf.keras.metrics.Metric):

    def __init__(self, name='Results', **kwargs):
        super(BinaryTruePositives, self).__init__(name=name, **kwargs)
        self.true_positives = self.add_weight(name='tp', initializer='zeros')

    def update_state(self, y_true, y_pred, sample_weight=None):
        y_true = tf.cast(y_true, dtype=tf.float32)
        y_pred = tf.cast(y_pred, dtype=tf.float32)
        y_true = tf.squeeze(y_true)
        # Map each prediction to +1/-1 before comparing with the labels.
        y_pred = tf.map_fn(lambda x: 1.0 if x >= 0.0 else -1.0, y_pred)
        z = tf.keras.backend.mean(tf.keras.backend.equal(y_true, y_pred))
        self.true_positives.assign_add(z)

    def result(self):
        return self.true_positives

    def reset_states(self):
        self.true_positives.assign(0.)

input_m = tf.keras.layers.Input(shape=(4,))
keras_1 = qml.qnn.KerasLayer(circuit, weights_shape, output_dim=1, name="keras_1")(input_m)
output = tf.keras.layers.Dense(1, activation="softmax", name="dense_1")(keras_1)

# Model creation
model = tf.keras.Model(inputs=input_m, outputs=output)

# Model compilation
model.compile(
    loss=tf.keras.losses.MeanSquaredError(),
    optimizer=tf.keras.optimizers.Adam(learning_rate=0.01),
    metrics=[BinaryTruePositives()])

history = model.fit(x_train, y_train, epochs=10, batch_size=3)

ValueError: Exception encountered when calling layer “keras_1” (type KerasLayer).

Weights tensor must have first dimension of length 3; got 2

Call arguments received:
• inputs=tf.Tensor(shape=(3, 4), dtype=float32)
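
A likely reading of this error, sketched under the assumptions of the code above: with weights_shape = {"weights": (3, 2)}, the loop for w in weights hands qml.TTN rows of shape (2,), whereas the template expects a weights tensor whose first dimension equals n_blocks = 3.

import numpy as np

# Illustration of the suspected mismatch, assuming weights of shape (3, 2):
weights = np.random.uniform(size=(3, 2))
for w in weights:
    print(w.shape)  # (2,) -- qml.TTN expects first dimension n_blocks == 3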

Hi @Amandeep,

Thank you for sharing your code. I’m taking a look at it and will be back soon with an answer.

Hi @Amandeep,

As the error mentions, it seems that your weights don't have the right dimensions. Unfortunately your code is not clear enough for me to make specific suggestions on how to solve this issue.

If you provide a minimal working (or non-working) example I can try to help you more. A minimal example will be self-contained so that I can try to reproduce the problem, but it will be a simplified version of your code, so it should not include anything that doesn’t strictly need to be there.

The process of creating the minimal non-working example can also be very beneficial to you in finding the solution yourself 🙂

Hi @CatalinaAlbornoz,

Thank you for your response. Yes, the issue is with the shape of the weights.
The code is:

mnist = fetch_openml('mnist_784', version=1, cache=True)
data = mnist['data']
labels = np.array(mnist['target'], dtype=np.int8)

labels_zero = labels[labels == 0] + 1
labels_one = labels[labels == 1] - 2
binary_labels = np.hstack((labels_zero, labels_one))
digits_zero = data[labels == 0]
digits_one = data[labels == 1]
data = np.vstack((digits_zero, digits_one))
data = data.reshape(data.shape[0], 28, 28, 1)
data = tf.image.resize(data[:], (2, 2)).numpy()
data = data.reshape(data.shape[0], 4)
sc = StandardScaler()
data = sc.fit_transform(data)

data = (data - np.min(data)) / (np.max(data) - np.min(data))
data = np.mod(data, np.pi * 0.5)
data.shape

x_train, x_test, y_train, y_test = train_test_split(data, binary_labels, test_size=0.2)

def block(weights, wires):
    qml.CNOT(wires=[wires[0], wires[1]])
    qml.RY(weights[0], wires=wires[0])
    qml.RY(weights[1], wires=wires[1])

n_wires = 4
n_block_wires = 2
n_params_block = 2
n_blocks = qml.TTN.get_n_blocks(range(n_wires), n_block_wires)
n_blocks

dev = qml.device("default.qubit.tf", wires=4)

@qml.qnode(dev, interface="tf", diff_method="backprop")
def circuit(inputs, weights):
    qml.AngleEmbedding(inputs, wires=range(4))

    for w in weights:
        qml.TTN(range(n_wires), n_block_wires, block, n_params_block, w)

    return qml.expval(qml.PauliZ(3))

weights_shape = {"weights": (3, 2)}

input_m = tf.keras.layers.Input(shape=(4,))
keras_1 = qml.qnn.KerasLayer(circuit, weights_shape, output_dim=1, name="keras_1")(input_m)
output = tf.keras.layers.Dense(1, activation="softmax", name="dense_1")(keras_1)

# Model creation
model = tf.keras.Model(inputs=input_m, outputs=output)

# Model compilation
model.compile(
    loss=tf.keras.losses.MeanSquaredError(),
    optimizer=tf.keras.optimizers.Adam(learning_rate=0.01),
    metrics=[BinaryTruePositives()])

history = model.fit(x_train, y_train, epochs=10, batch_size=3)

The idea is to attach a TF layer at the end of the TTN.

Hey @Amandeep! There’s quite a bit missing from your code (e.g., imports and functions you’ve defined) in order for us to replicate your issue. Are you able to attach a more minimal example that reproduces the error?

Hi @isaacdevlugt, @CatalinaAlbornoz,
Here is the code:

import numpy as np
import tensorflow as tf
from sklearn.datasets import fetch_openml
from sklearn.preprocessing import StandardScaler
from sklearn.model_selection import train_test_split

import pennylane as qml
from pennylane import numpy as np

from pennylane.templates.state_preparations import MottonenStatePreparation
from pennylane.templates.layers import StronglyEntanglingLayers

class BinaryTruePositives(tf.keras.metrics.Metric):

    def __init__(self, name='Results', **kwargs):
        super(BinaryTruePositives, self).__init__(name=name, **kwargs)
        self.true_positives = self.add_weight(name='tp', initializer='zeros')

    def update_state(self, y_true, y_pred, sample_weight=None):
        y_true = tf.cast(y_true, dtype=tf.float32)
        y_pred = tf.cast(y_pred, dtype=tf.float32)
        y_true = tf.squeeze(y_true)
        # Map each prediction to +1/-1 before comparing with the labels.
        y_pred = tf.map_fn(lambda x: 1.0 if x >= 0.0 else -1.0, y_pred)
        z = tf.keras.backend.mean(tf.keras.backend.equal(y_true, y_pred))
        self.true_positives.assign_add(z)

    def result(self):
        return self.true_positives

    def reset_states(self):
        self.true_positives.assign(0.)

The rest of the code is the same.
from sklearn.datasets import fetch_openml
from sklearn.preprocessing import StandardScaler
from sklearn.model_selection import train_test_split

mnist = fetch_openml('mnist_784', version=1, cache=True)
data = mnist['data']
labels = np.array(mnist['target'], dtype=np.int8)

labels_zero = labels[labels == 0] + 1
labels_one = labels[labels == 1] - 2
binary_labels = np.hstack((labels_zero, labels_one))
digits_zero = data[labels == 0]
digits_one = data[labels == 1]
data = np.vstack((digits_zero, digits_one))
data = data.reshape(data.shape[0], 28, 28, 1)
data = tf.image.resize(data[:], (2, 2)).numpy()
data = data.reshape(data.shape[0], 4)
sc = StandardScaler()
data = sc.fit_transform(data)

data = (data - np.min(data)) / (np.max(data) - np.min(data))
data = np.mod(data, np.pi * 0.5)
data.shape

Hmmm, I’m not able to reproduce your error. Here is the code I’m running just to be sure:

import numpy as np
import tensorflow as tf
from sklearn.datasets import fetch_openml
from sklearn.preprocessing import StandardScaler
from sklearn.model_selection import train_test_split

import pennylane as qml
from pennylane import numpy as np


class BinaryTruePositives(tf.keras.metrics.Metric):
    def __init__(self, name="Results", **kwargs):
        super(BinaryTruePositives, self).__init__(name=name, **kwargs)

        self.true_positives = self.add_weight(name="tp", initializer="zeros")

    def update_state(self, y_true, y_pred, sample_weight=None):

        y_true = tf.cast(y_true, dtype=tf.float32)
        y_pred = tf.cast(y_pred, dtype=tf.float32)

        y_true = tf.squeeze(y_true)

        y_pred = tf.map_fn(lambda x: 1.0 if x >= 0.0 else -1.0, y_pred)

        z = tf.keras.backend.mean(tf.keras.backend.equal(y_true, y_pred))

        self.true_positives.assign_add(z)

    def result(self):
        return self.true_positives

    def reset_states(self):
        self.true_positives.assign(0.0)


from sklearn.datasets import fetch_openml
from sklearn.preprocessing import StandardScaler
from sklearn.model_selection import train_test_split

mnist = fetch_openml("mnist_784", version=1, cache=True)
data = mnist["data"]
labels = np.array(mnist["target"], dtype=np.int8)

labels_zero = labels[labels == 0] + 1
labels_one = labels[labels == 1] - 2
binary_labels = np.hstack((labels_zero, labels_one))
digits_zero = data[labels == 0]
digits_one = data[labels == 1]
data = np.vstack((digits_zero, digits_one))
data = data.reshape(data.shape[0], 28, 28, 1)
data = tf.image.resize(data[:], (2, 2)).numpy()
data = data.reshape(data.shape[0], 4)
sc = StandardScaler()
data = sc.fit_transform(data)

data = (data - np.min(data)) / (np.max(data) - np.min(data))
data = np.mod(data, np.pi * 0.5)
data.shape

x_train, x_test, y_train, y_test = train_test_split(data, binary_labels, test_size=0.2)


def block(weights, wires):
    qml.CNOT(wires=[wires[0], wires[1]])
    qml.RY(weights[0], wires=wires[0])
    qml.RY(weights[1], wires=wires[1])


n_wires = 4
n_block_wires = 2
n_params_block = 2
n_blocks = qml.TTN.get_n_blocks(range(n_wires), n_block_wires)

dev = qml.device("default.qubit.tf", wires=4)


@qml.qnode(dev, interface="tf", diff_method="backprop")
def circuit(inputs, weights):
    qml.AngleEmbedding(inputs, wires=range(4))

    for w in weights:

        qml.TTN(range(n_wires), n_block_wires, block, n_params_block, w)

    return qml.expval(qml.PauliZ(3))


weights_shape = {"weights": (3, 2)}

input_m = tf.keras.layers.Input(shape=(4,))
keras_1 = qml.qnn.KerasLayer(circuit, weights_shape, output_dim=1, name="keras_1")(
    input_m
)
output = tf.keras.layers.Dense(4, activation="softmax", name="dense_1")(input_m)

model = tf.keras.Model(inputs=input_m, outputs=output)

model.compile(
    loss=tf.keras.losses.MeanSquaredError(),
    optimizer=tf.keras.optimizers.Adam(learning_rate=0.01),
)

history = model.fit(x_train, y_train, epochs=10, batch_size=3)

'''output:
Epoch 1/10
3942/3942 [==============================] - 2s 546us/step - loss: 1.0963
Epoch 2/10
3942/3942 [==============================] - 2s 550us/step - loss: 1.0963
Epoch 3/10
3942/3942 [==============================] - 2s 566us/step - loss: 1.0963
Epoch 4/10
3942/3942 [==============================] - 2s 551us/step - loss: 1.0963
Epoch 5/10
3942/3942 [==============================] - 2s 548us/step - loss: 1.0963
Epoch 6/10
3942/3942 [==============================] - 2s 549us/step - loss: 1.0963
Epoch 7/10
3942/3942 [==============================] - 2s 551us/step - loss: 1.0963
Epoch 8/10
3942/3942 [==============================] - 2s 552us/step - loss: 1.0963
Epoch 9/10
3942/3942 [==============================] - 2s 560us/step - loss: 1.0963
Epoch 10/10
3942/3942 [==============================] - 2s 616us/step - loss: 1.0963
'''

Maybe there is an issue with your versions of PennyLane, TensorFlow, etc. Are you using the most up-to-date versions of everything?
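
One quick way to check is qml.about(), which prints the installed PennyLane version, plugins, and platform details (a minimal sketch):

import pennylane as qml
import tensorflow as tf

qml.about()            # PennyLane version, installed plugins, Python/platform info
print(tf.__version__)  # TensorFlow version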

@isaacdevlugt can you check:

input_m = tf.keras.layers.Input(shape=(4,))
keras_1 = qml.qnn.KerasLayer(circuit, weights_shape, output_dim=1, name="keras_1")(input_m)
output = tf.keras.layers.Dense(1, activation="softmax", name="dense_1")(keras_1)

It did not work. Also, in your code the keras_1 layer is never used: the Dense layer is applied to input_m directly, so the quantum layer is skipped during execution.

Whoops! You're right. One thing that was incorrectly defined was the shape of your weights. See template_weights in the code below.

After changing the weight dimensions, there is still an error that comes from the fact that the output dimension of your quantum layer is only 1. If I change it to output a 2-dimensional array and change the input dimension of the output layer accordingly, then everything seems to work. I also used Sequential instead of Model to create the entire hybrid model, for clarity. A short sanity check on the weight shape follows the code.

import numpy as np
import tensorflow as tf
from sklearn.datasets import fetch_openml
from sklearn.preprocessing import StandardScaler
from sklearn.model_selection import train_test_split

import pennylane as qml
from pennylane import numpy as np


class BinaryTruePositives(tf.keras.metrics.Metric):
    def __init__(self, name="Results", **kwargs):
        super(BinaryTruePositives, self).__init__(name=name, **kwargs)

        self.true_positives = self.add_weight(name="tp", initializer="zeros")

    def update_state(self, y_true, y_pred, sample_weight=None):

        y_true = tf.cast(y_true, dtype=tf.float32)
        y_pred = tf.cast(y_pred, dtype=tf.float32)

        y_true = tf.squeeze(y_true)

        y_pred = tf.map_fn(lambda x: 1.0 if x >= 0.0 else -1.0, y_pred)

        z = tf.keras.backend.mean(tf.keras.backend.equal(y_true, y_pred))

        self.true_positives.assign_add(z)

    def result(self):
        return self.true_positives

    def reset_states(self):
        self.true_positives.assign(0.0)


from sklearn.datasets import fetch_openml
from sklearn.preprocessing import StandardScaler
from sklearn.model_selection import train_test_split

mnist = fetch_openml("mnist_784", version=1, cache=True)
data = mnist["data"]
labels = np.array(mnist["target"], dtype=np.int8)

labels_zero = labels[labels == 0] + 1
labels_one = labels[labels == 1] - 2
binary_labels = np.hstack((labels_zero, labels_one))
digits_zero = data[labels == 0]
digits_one = data[labels == 1]
data = np.vstack((digits_zero, digits_one))
data = data.reshape(data.shape[0], 28, 28, 1)
data = tf.image.resize(data[:], (2, 2)).numpy()
data = data.reshape(data.shape[0], 4)
sc = StandardScaler()
data = sc.fit_transform(data)

data = (data - np.min(data)) / (np.max(data) - np.min(data))
data = np.mod(data, np.pi * 0.5)
data.shape

x_train, x_test, y_train, y_test = train_test_split(data, binary_labels, test_size=0.2)

print(x_train.shape)
print(x_train[0])

def block(weights, wires):
    qml.CNOT(wires=[wires[0], wires[1]])
    qml.RY(weights[0], wires=wires[0])
    qml.RY(weights[1], wires=wires[1])


n_wires = 4
n_block_wires = 2
n_params_block = 2
n_blocks = qml.TTN.get_n_blocks(range(n_wires), n_block_wires)

dev = qml.device("default.qubit.tf", wires=n_wires)

@qml.qnode(dev, interface="tf", diff_method="backprop")
def circuit(inputs, weights):
    qml.AngleEmbedding(inputs, wires=range(4))
    qml.TTN(range(n_wires), n_block_wires, block, n_params_block, weights)
    return [qml.expval(qml.PauliZ(3)), qml.expval(qml.PauliZ(2))]

template_weights = np.random.uniform(size=(n_blocks, 2))
weights_shape = {"weights": template_weights.shape}

keras_1 = qml.qnn.KerasLayer(circuit, weights_shape, output_dim=2, name="keras_1")
output = tf.keras.layers.Dense(2, activation="softmax", name="dense_1")

model = tf.keras.models.Sequential([keras_1, output])

qlayer_out = circuit(x_train[0], template_weights)
print(qlayer_out)

model.compile(
    loss=tf.keras.losses.MeanSquaredError(),
    optimizer=tf.keras.optimizers.Adam(learning_rate=0.01),
)

history = model.fit(x_train, y_train, epochs=2, batch_size=2)
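
As a sanity check on the corrected weight shape (a small sketch, assuming the 4-wire, 2-wire-block setup above): qml.TTN expects its weights argument to have shape (n_blocks, n_params_block), which get_n_blocks confirms.

# Sanity check, assuming the setup above (4 wires, 2-wire blocks):
n_blocks = qml.TTN.get_n_blocks(range(4), 2)
print(n_blocks)                # 3 -- matches "first dimension of length 3"
print(template_weights.shape)  # (3, 2) == (n_blocks, n_params_block)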