Error when using CVNeuralNetLayers

Hello,

I am working on an experiment trying different implementations of hybrid quantum neural networks: I have a larger classical neural network and replace its final dense layer with different quantum layers.

So far, I have tried PennyLane's BasicEntanglerLayers and StronglyEntanglingLayers templates as well as a self-developed circuit. All of these worked well and produced good results. Now I have tried an implementation of CVNeuralNetLayers as described in Killoran et al. 2018 ( https://pennylane.readthedocs.io/en/stable/code/api/pennylane.CVNeuralNetLayers.html ).

However, with this layer I run into a weird TensorFlow error when trying to train the model:

tensorflow.python.framework.errors_impl.InvalidArgumentError: cannot compute Mul as input #1(zero-based) was expected to be a float tensor but is a double tensor [Op:Mul]

Does anyone know what this error means, or at least how I can avoid it?

Best wishes,
Philipp


Here is my implementation:

A) The Qnode

import pennylane as qml
from pennylane import numpy as np

def get_CVQuantumLayer(num_inputs, num_outputs, n_layers=1,
                       device="strawberryfields.fock", cutoff_dim=10):

    n_qubits = num_inputs
    if num_outputs > n_qubits:
        raise ValueError("Number of output measurements cannot be larger than the number of qubits")

    # 0) Initialisations
    # 0.1) Define device
    dev = qml.device(device, wires=n_qubits, cutoff_dim=cutoff_dim)

    # 0.2) Define weight shapes (following Killoran et al. 2018)
    L = n_layers
    M = n_qubits
    K = M * (M - 1) // 2  # number of interferometer parameters
    weight_shapes = {
        'theta_1': (L, K), 'phi_1': (L, K), 'varphi_1': (L, M),
        'r': (L, M), 'phi_r': (L, M),
        'theta_2': (L, K), 'phi_2': (L, K), 'varphi_2': (L, M),
        'a': (L, M), 'phi_a': (L, M), 'k': (L, M),
    }

    #  1) Define QNode to build quantum layer 
    @qml.qnode(dev, interface='tf')
    def qnode(inputs, theta_1, phi_1, varphi_1, r, phi_r, theta_2, phi_2, varphi_2, a, phi_a, k):
        # 1.1) Squeezing embedding
        qml.templates.SqueezingEmbedding(inputs, wires=range(n_qubits))
        # 1.2) CVNN layers
        qml.templates.CVNeuralNetLayers(theta_1, phi_1, varphi_1, r, phi_r,
                                        theta_2, phi_2, varphi_2, a, phi_a, k,
                                        wires=range(n_qubits))
        return [qml.expval(qml.NumberOperator(wires=i)) for i in range(num_outputs)]

    # 2) make QLayer
    qlayer = qml.qnn.KerasLayer(qnode, weight_shapes, output_dim=num_outputs)

    return qlayer
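As a quick sanity check on the shape bookkeeping above (my own sketch, not part of the original function): for M = 3 modes and L = 1 layer, the interferometer parameter count K and the total number of trainable weights can be verified in plain Python:

```python
# Sketch: verify the CVNeuralNetLayers weight shapes for a small case.
# Assumes M = 3 modes and L = 1 layer, matching the network below.
L, M = 1, 3
K = M * (M - 1) // 2  # parameters per interferometer; K = 3 for M = 3

weight_shapes = {
    'theta_1': (L, K), 'phi_1': (L, K), 'varphi_1': (L, M),
    'r': (L, M), 'phi_r': (L, M),
    'theta_2': (L, K), 'phi_2': (L, K), 'varphi_2': (L, M),
    'a': (L, M), 'phi_a': (L, M), 'k': (L, M),
}

# Total number of trainable parameters in the quantum layer.
total_params = sum(layers * size for layers, size in weight_shapes.values())
print(K, total_params)  # 3 33
```

So for 3 modes each of the 11 weight tensors has shape (1, 3), giving 33 trainable parameters in the Keras layer.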

B) The Network (simplified version)

import tensorflow as tf

input_data = tf.keras.Input(shape=input_data_size)
layer_dense = tf.keras.layers.Dense(dense_layer_size)(input_data)

# 1.Q) Generate Quantum Layer as output
# 1.Q.0) Define parameters
num_qubits = 3   # number of qubits used
num_qlayers = 1  # number of layers in the quantum layer

# 1.Q.1) Add an additional dense layer to reduce the dimension to the number of qubits required
layer_quant_in = tf.keras.layers.Dense( num_qubits )(layer_dense)

# 1.Q.2) Load quantum layer
from QLayers import get_CVQuantumLayer 
qlayer = get_CVQuantumLayer(num_qubits, output_data_size, num_qlayers )(layer_quant_in)

# 3.2) Assemble Network
model = tf.keras.Model(inputs=input_data, outputs=qlayer)
model.compile(optimizer=tf.optimizers.Adam(), loss=tf.keras.losses.MeanSquaredError())

Hi @PhilippHS, welcome to the forum!

I’m having trouble replicating your problem because I don’t have access to the following variables: input_data, input_data_size, dense_layer_size, and output_data_size.

In the past I’ve seen this problem happen sometimes with Keras, and it can be solved by adding the following line to your code:

tf.keras.backend.set_floatx('float64')

Please let me know if this works for you!
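For context on why this fix works (my own illustration, not from the replies above): Keras defaults to float32 while NumPy, which PennyLane uses to initialise the layer weights, defaults to float64. Unlike NumPy, TensorFlow refuses to multiply tensors of different float dtypes, which is exactly what the `Mul` error message says. A minimal sketch of the dtype mismatch, using NumPy's promotion behaviour for contrast:

```python
import numpy as np

a = np.float32(1.5)   # float32: the Keras default dtype
b = np.array([2.0])   # float64: the NumPy default dtype
result = a * b

# NumPy silently promotes the product to float64. TensorFlow instead
# raises InvalidArgumentError ("expected to be a float tensor but is a
# double tensor") when the operand dtypes disagree, which is why forcing
# Keras to float64 everywhere resolves the error.
print(result.dtype)  # float64
```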

Thank you very much @CatalinaAlbornoz using

tf.keras.backend.set_floatx('float64')

indeed solved the problem :slight_smile:

Update: I accidentally found another solution to the problem. Using the device “strawberryfields.tf” (instead of “strawberryfields.fock” ) also solves the issue and seems to make the code a bit faster.

That’s great to hear @PhilippHS!

Indeed, “strawberryfields.tf” uses the TensorFlow interface (see more here), so by switching from the Fock device (“strawberryfields.fock”) to it you no longer have the “float64” issue.