WARNING:tensorflow:You are casting an input of type complex128 to an incompatible dtype float32. This will discard the imaginary part and may not be what you intended

Hello! If applicable, put your complete code example down below. Make sure that your code:

  • is 100% self-contained — someone can copy-paste exactly what is here and run it to
    reproduce the behaviour you are observing
  • includes comments

If you want help with diagnosing an error, please put the full error message below:

import numpy as np
import tensorflow as tf
import matplotlib.pyplot as plt


# Define the neural network model
class PINN(tf.keras.Model):
    def __init__(self):
        super(PINN, self).__init__()
        self.dense1 = tf.keras.layers.Dense(20, activation='tanh')
        self.dense2 = tf.keras.layers.Dense(20, activation='tanh')
        self.dense3 = tf.keras.layers.Dense(20, activation='tanh')
        self.dense4 = tf.keras.layers.Dense(3, activation='linear')

    def call(self, x):
        x = tf.expand_dims(x, axis=-1)  # Add an extra dimension to make input 2D
        x = self.dense1(x)
        x = self.dense2(x)
        x = self.dense3(x)
        x = self.dense4(x)
        # Split the output into three values
        R_p = x[:, 0:1]  # Extract first column as R_p
        R_m = x[:, 1:2]  # Extract second column as R_m
        T = x[:, 2:3]    # Extract third column as T
        return R_p, R_m, T  # Return a tuple of three tensors

def g(x, k):
    # Placeholder function for g(x, k). Replace with actual logic if needed.
    return x * k  # Example computation for g function

# Define loss function with physics-informed constraints
def loss_function(model, omega, n):
    D = 1    # stiffness constant
    rho = 1  # density constant
    omega_complex = tf.cast(omega, tf.complex128)  # treat omega as complex throughout the physics
    k = (omega_complex**2 * rho / D)**(1 / 4)  # wavenumber: k = (omega^2 * rho / D)^(1/4)

    M = np.array([1.0])  # mass parameter
    K = np.array([1.0])  # stiffness parameter
    v = np.array([1.0])  # damping parameter
    a = 1                # spacing between attachment points

    # Attachment positions: xmi[ind] = -0.5 + ind * a
    xmi = np.array([-0.5 + ind * a for ind in range(n)])

    # This term does not depend on the loop index, so compute it once
    mialpbet = ((1 / (M * omega_complex**2)) - (1 / (K - 1j * omega_complex * v)))**-1

    k3 = k**3
    malf = (2 * D * k3) / mialpbet
    deltaalfbet = np.eye(n)  # Kronecker delta

    # Assemble the interaction matrix and invert it
    Malfbet = np.zeros((n, n), dtype=complex)
    for xalf in range(len(xmi)):
        for xbet in range(len(xmi)):
            Malfbet[xalf, xbet] = malf[xalf] * deltaalfbet[xalf, xbet] - g(xmi[xalf] - xmi[xbet], k[xalf])

    Malfbet1 = np.linalg.inv(Malfbet)

    # Network predictions for the three scattering coefficients
    R_p, R_m, T = model(omega)

    # Physics-informed residuals: network outputs minus the coefficients
    # assembled from the inverted interaction matrix
    physics_loss_Rp = tf.cast(R_p, tf.complex128) - (1j / 2) * tf.reduce_sum(
        [Malfbet1[xalf, xbet] * tf.exp(1j * k * (xmi[xalf] + xmi[xbet]))
         for xalf in range(len(xmi)) for xbet in range(len(xmi))])
    physics_loss_Rm = tf.cast(R_m, tf.complex128) - (1j / 2) * tf.reduce_sum(
        [Malfbet1[xalf, xbet] * tf.exp(-1j * k * (xmi[xalf] + xmi[xbet]))
         for xalf in range(len(xmi)) for xbet in range(len(xmi))])
    physics_loss_T = tf.cast(T, tf.complex128) - (1 + (1j / 2) * tf.reduce_sum(
        [Malfbet1[xalf, xbet] * tf.exp(1j * k * (xmi[xalf] - xmi[xbet]))
         for xalf in range(len(xmi)) for xbet in range(len(xmi))]))

    # Squared magnitude of the complex residuals gives a real-valued loss
    total_loss = tf.reduce_mean(tf.square(tf.abs(physics_loss_Rp))
                                + tf.square(tf.abs(physics_loss_Rm))
                                + tf.square(tf.abs(physics_loss_T)))

    return total_loss




# Training function
def train(model, omega, n, epochs, learning_rate):
    optimizer = tf.keras.optimizers.Adam(learning_rate)

    for epoch in range(epochs):
        with tf.GradientTape() as tape:
            loss = loss_function(model, omega, n)

        gradients = tape.gradient(loss, model.trainable_variables)
        optimizer.apply_gradients(zip(gradients, model.trainable_variables))

        if epoch % 100 == 0:
            print(f'Epoch {epoch}, Loss: {loss.numpy()}')

# Initialize PINN model
pinn_model = PINN()

# Training parameters
epochs = 1000
learning_rate = 0.001
Omega = tf.constant(np.arange(1, 101), dtype=tf.float64)  # start at 1: omega = 0 makes 1/(M*omega^2) blow up
n = 1

# Train the PINN

train(pinn_model, Omega, n, epochs, learning_rate)

# Predict using the trained PINN model (one forward pass, then unpack the tuple)
R_p_pred, R_m_pred, T_pred = pinn_model(Omega)
print(R_p_pred, R_m_pred, T_pred)


# Plot the magnitudes of the predicted coefficients
plt.figure(1)
plt.plot(Omega, np.abs(R_m_pred.numpy()), label='|R_m| PINN')  # convert tensor to NumPy for plotting
plt.plot(Omega, np.abs(T_pred.numpy()), '--', label='|T| PINN')
plt.xlabel('Omega')
plt.legend()
plt.title('PINN Prediction of |R_m| and |T| vs Omega')
plt.show()


Full error message:

WARNING:tensorflow:You are casting an input of type complex128 to an incompatible dtype float32. This will discard the imaginary part and may not be what you intended.

And, finally, make sure to include the versions of your packages. Specifically, show us the output of qml.about().

Hi @Pawel_Marciniak , welcome to the Forum!

Unfortunately, the code you shared is quite complex, so we cannot easily identify whether there's an issue or where it might be coming from.

Given that you’re only getting a warning, I suggest that you compare the output with your expected result to see whether you’re indeed getting the correct output. Casting is often necessary and doesn’t always mean there’s something wrong.
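For example, here is a minimal sketch of how you could check whether a cast is actually discarding information (the tensor z below is just an illustration, not your actual model output):

import tensorflow as tf

z = tf.constant([1.0 + 0.0j, 2.0 + 3.0j], dtype=tf.complex128)

# If this is close to zero, the imaginary part is negligible and the cast is harmless;
# if it is large, the cast is throwing away information you probably care about.
print(tf.reduce_max(tf.abs(tf.math.imag(z))).numpy())

# Being explicit about which part you keep avoids the implicit complex-to-float cast:
x = tf.cast(tf.math.real(z), tf.float32)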

If you think there’s a bug could you please provide the following information? It can help us understand the problem and find possible solutions:

  1. The output of qml.about()

  2. A minimal reproducible example (or minimal working example)
    This is the simplest version of the code that reproduces the problem. It should be self-contained, including all necessary imports, data, functions, etc., so that we can copy-paste the code and reproduce the problem. However, it shouldn’t contain anything unnecessary, such as gates or functions that can be removed without affecting the behaviour (see the short example after this list).

  3. The full error traceback.

If you’re not sure what these mean, make sure to check out this video.
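To give you an idea of what “minimal” means here: the warning you quoted can be reproduced on its own in a couple of lines (this snippet is generic, not taken from your model):

import tensorflow as tf

z = tf.constant(1.0 + 2.0j, dtype=tf.complex128)
x = tf.cast(z, tf.float32)  # emits the complex128 -> float32 casting warning
print(x)  # the value is 1.0; the imaginary part has been discarded

If you can trim your code down to something close to this size that still shows the unexpected behaviour, it becomes much easier for us to help.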

I hope this helps!