default.tensor as a qml.qnn.KerasLayer

Hello! This post follows up on my previous reply in "Cannot pass a symbolic Keras tensor to postselect argument of qml.measure() when running on Tensorflow graph mode".

I have an issue when I try to train with TensorFlow and my custom QNode.
I had a look at the PennyLane tutorials to recreate a minimal example.
Example “tutorial_qnn_module_tf.py” (qml/demonstrations/tutorial_qnn_module_tf.py at master · PennyLaneAI/qml · GitHub)

I used the exact code from the tutorial, only adding:

import os
os.environ["TF_USE_LEGACY_KERAS"] = "1"

to use Keras 2.
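
The only other change is the device line, roughly like this (a minimal sketch, assuming the tutorial's two-qubit setup):

import pennylane as qml

n_qubits = 2
# dev = qml.device("default.qubit", wires=n_qubits)  # original tutorial device: trains fine
dev = qml.device("default.tensor", wires=n_qubits)   # swapped-in device: triggers the error below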
When I change the device from default.qubit to default.tensor in this way, I get:

Epoch 1/6
/usr/local/lib/python3.10/dist-packages/cotengra/interface.py:719: UserWarning: Contraction cache disabled as one of the arguments is not hashable: {'dtype': 'complex128', 'simplify_sequence': 'ADCRS', 'simplify_atol': 0.0}.
  warnings.warn(
---------------------------------------------------------------------------
TypeError                                 Traceback (most recent call last)
<ipython-input-1-2d534f3efa97> in <cell line: 169>()
    167 # The model is now ready to be trained!
    168 
--> 169 fitting = model.fit(X, y_hot, epochs=6, batch_size=5, validation_split=0.25, verbose=2)
    170 
    171 ###############################################################################

28 frames
/usr/local/lib/python3.10/dist-packages/tf_keras/src/utils/traceback_utils.py in error_handler(*args, **kwargs)
     68             # To get the full stack trace, call:
     69             # `tf.debugging.disable_traceback_filtering()`
---> 70             raise e.with_traceback(filtered_tb) from None
     71         finally:
     72             del filtered_tb

/usr/local/lib/python3.10/dist-packages/pennylane/qnn/keras.py in call(self, inputs)
    412 
    413         # calculate the forward pass as usual
--> 414         results = self._evaluate_qnode(inputs)
    415 
    416         # reshape to the correct number of batch dims

/usr/local/lib/python3.10/dist-packages/pennylane/qnn/keras.py in _evaluate_qnode(self, x)
    435             **{k: 1.0 * w for k, w in self.qnode_weights.items()},
    436         }
--> 437         res = self.qnode(**kwargs)
    438 
    439         if isinstance(res, (list, tuple)):

/usr/local/lib/python3.10/dist-packages/pennylane/workflow/qnode.py in __call__(self, *args, **kwargs)
    985         if qml.capture.enabled():
    986             return qml.capture.qnode_call(self, *args, **kwargs)
--> 987         return self._impl_call(*args, **kwargs)
    988 
    989 

/usr/local/lib/python3.10/dist-packages/pennylane/workflow/qnode.py in _impl_call(self, *args, **kwargs)
    975 
    976         try:
--> 977             res = self._execution_component(args, kwargs)
    978         finally:
    979             if old_interface == "auto":

/usr/local/lib/python3.10/dist-packages/pennylane/workflow/qnode.py in _execution_component(self, args, kwargs)
    933 
    934         # pylint: disable=unexpected-keyword-arg
--> 935         res = qml.execute(
    936             (self._tape,),
    937             device=self.device,

/usr/local/lib/python3.10/dist-packages/pennylane/workflow/execution.py in execute(tapes, device, gradient_fn, interface, transform_program, inner_transform, config, grad_on_execution, gradient_kwargs, cache, cachesize, max_diff, device_vjp, mcm_config)
    622 
    623     if interface in jpc_interfaces:
--> 624         results = ml_boundary_execute(tapes, execute_fn, jpc, device=device)
    625     else:
    626         results = ml_boundary_execute(

/usr/local/lib/python3.10/dist-packages/pennylane/workflow/interfaces/tensorflow.py in tf_execute(tapes, execute_fn, jpc, device, differentiable)
    231     dtype = params_dtype if params_dtype in {tf.float64, tf.complex128} else None
    232     # make sure is float64 if data is float64.  May cause errors otherwise if device returns float32 precision
--> 233     res = _to_tensors(execute_fn(numpy_tapes), dtype=dtype, complex_safe=True)
    234 
    235     @tf.custom_gradient

/usr/local/lib/python3.10/dist-packages/pennylane/workflow/execution.py in inner_execute(tapes, **_)
    200 
    201         if transformed_tapes:
--> 202             results = device.execute(transformed_tapes, execution_config=execution_config)
    203         else:
    204             results = ()

/usr/local/lib/python3.10/dist-packages/pennylane/devices/modifiers/simulator_tracking.py in execute(self, circuits, execution_config)
     28     @wraps(untracked_execute)
     29     def execute(self, circuits, execution_config=DefaultExecutionConfig):
---> 30         results = untracked_execute(self, circuits, execution_config)
     31         if isinstance(circuits, QuantumScript):
     32             batch = (circuits,)

/usr/local/lib/python3.10/dist-packages/pennylane/devices/modifiers/single_tape_support.py in execute(self, circuits, execution_config)
     30             is_single_circuit = True
     31             circuits = (circuits,)
---> 32         results = batch_execute(self, circuits, execution_config)
     33         return results[0] if is_single_circuit else results
     34 

/usr/local/lib/python3.10/dist-packages/pennylane/devices/default_tensor.py in execute(self, circuits, execution_config)
    658                 )
    659             circuit = circuit.map_to_standard_wires()
--> 660             results.append(self.simulate(circuit))
    661 
    662         return tuple(results)

/usr/local/lib/python3.10/dist-packages/pennylane/devices/default_tensor.py in simulate(self, circuit)
    704             if len(circuit.measurements) == 1:
    705                 return self.measurement(circuit.measurements[0])
--> 706             return tuple(self.measurement(mp) for mp in circuit.measurements)
    707 
    708         raise NotImplementedError  # pragma: no cover

/usr/local/lib/python3.10/dist-packages/pennylane/devices/default_tensor.py in <genexpr>(.0)
    704             if len(circuit.measurements) == 1:
    705                 return self.measurement(circuit.measurements[0])
--> 706             return tuple(self.measurement(mp) for mp in circuit.measurements)
    707 
    708         raise NotImplementedError  # pragma: no cover

/usr/local/lib/python3.10/dist-packages/pennylane/devices/default_tensor.py in measurement(self, measurementprocess)
    728         """
    729 
--> 730         return self._get_measurement_function(measurementprocess)(measurementprocess)
    731 
    732     def _get_measurement_function(

/usr/local/lib/python3.10/dist-packages/pennylane/devices/default_tensor.py in expval(self, measurementprocess)
    766 
    767         obs = measurementprocess.obs
--> 768         return expval_core(obs, self)
    769 
    770     def state(self, measurementprocess: MeasurementProcess):  # pylint: disable=unused-argument

/usr/lib/python3.10/functools.py in wrapper(*args, **kw)
    887                             '1 positional argument')
    888 
--> 889         return dispatch(args[0].__class__)(*args, **kw)
    890 
    891     funcname = getattr(func, '__name__', 'singledispatch function')

/usr/local/lib/python3.10/dist-packages/pennylane/devices/default_tensor.py in expval_core(obs, device)
   1022 def expval_core(obs: Observable, device) -> float:
   1023     """Dispatcher for expval."""
-> 1024     return device._local_expectation(qml.matrix(obs), tuple(obs.wires))
   1025 
   1026 

/usr/local/lib/python3.10/dist-packages/pennylane/devices/default_tensor.py in _local_expectation(self, matrix, wires)
    808         qc = self._quimb_circuit.copy()
    809 
--> 810         exp_val = qc.local_expectation(
    811             matrix,
    812             wires,

/usr/local/lib/python3.10/dist-packages/quimb/tensor/circuit.py in local_expectation(self, G, where, normalized, **contract_opts)
   4684         float
   4685         """
-> 4686         return self._psi.local_expectation_canonical(
   4687             G,
   4688             where,

/usr/local/lib/python3.10/dist-packages/quimb/tensor/tensor_1d.py in local_expectation_canonical(self, G, where, normalized, info, **contract_opts)
   2663         float
   2664         """
-> 2665         rho = self.partial_trace_to_dense_canonical(
   2666             where, normalized=normalized, info=info, **contract_opts
   2667         )

/usr/local/lib/python3.10/dist-packages/quimb/tensor/tensor_1d.py in partial_trace_to_dense_canonical(self, where, normalized, info, **contract_opts)
   2626 
   2627         # contract down to a matrix
-> 2628         rho = rho_tn.to_dense(kix, bix, **contract_opts)
   2629 
   2630         if normalized:

/usr/local/lib/python3.10/dist-packages/quimb/tensor/tensor_core.py in to_dense(self, to_qarray, *inds_seq, **contract_opts)
   8834         """
   8835         tags = contract_opts.pop("tags", all)
-> 8836         t = self.contract(
   8837             tags,
   8838             output_inds=tuple(concat(inds_seq)),

/usr/local/lib/python3.10/dist-packages/quimb/tensor/tensor_core.py in contract(self, tags, output_inds, optimize, get, backend, preserve_tensor, max_bond, inplace, **opts)
   8543         # contracting everything to single output
   8544         if all_tags and not inplace:
-> 8545             return tensor_contract(*self.tensor_map.values(), **opts)
   8546 
   8547         # contract some or all tensors, but keeping tensor network

/usr/lib/python3.10/functools.py in wrapper(*args, **kw)
    887                             '1 positional argument')
    888 
--> 889         return dispatch(args[0].__class__)(*args, **kw)
    890 
    891     funcname = getattr(func, '__name__', 'singledispatch function')

/usr/local/lib/python3.10/dist-packages/quimb/tensor/tensor_core.py in tensor_contract(output_inds, optimize, get, backend, preserve_tensor, drop_tags, *tensors, **contract_opts)
    295 
    296     # perform the contraction!
--> 297     data_out = array_contract(
    298         arrays,
    299         inds,

/usr/local/lib/python3.10/dist-packages/quimb/tensor/contraction.py in array_contract(arrays, inputs, output, optimize, backend, **kwargs)
    284     if backend is None:
    285         backend = get_contract_backend()
--> 286     return ctg.array_contract(
    287         arrays, inputs, output, optimize=optimize, backend=backend, **kwargs
    288     )

/usr/local/lib/python3.10/dist-packages/cotengra/interface.py in array_contract(arrays, inputs, output, optimize, cache_expression, backend, **kwargs)
    784     """
    785     shapes = tuple(map(ar.shape, arrays))
--> 786     expr = array_contract_expression(
    787         inputs,
    788         output,

/usr/local/lib/python3.10/dist-packages/cotengra/interface.py in array_contract_expression(inputs, output, size_dict, shapes, optimize, constants, canonicalize, cache, **kwargs)
    722             )
    723 
--> 724             expr = _build_expression(
    725                 inputs, output, size_dict, optimize=optimize, **kwargs
    726             )

TypeError: Exception encountered when calling layer 'keras_layer' (type KerasLayer).

_build_expression() got an unexpected keyword argument 'dtype'

Call arguments received by layer 'keras_layer' (type KerasLayer):
  • inputs=tf.Tensor(shape=(5, 2), dtype=float64)

Is there a particular way to use default.tensor with TensorFlow?
Is there perhaps an example of using default.tensor in a TensorFlow workflow?
I am running this on Google Colab, but I get the same error in my own venv and on an SSH server.
Thank you in advance for the help.
This is my qml.about():

Name: PennyLane
Version: 0.39.0
Summary: PennyLane is a cross-platform Python library for quantum computing, quantum machine learning, and quantum chemistry. Train a quantum computer the same way as a neural network.
Home-page: https://github.com/PennyLaneAI/pennylane
Author: 
Author-email: 
License: Apache License 2.0
Location: /usr/local/lib/python3.10/dist-packages
Requires: appdirs, autograd, autoray, cachetools, networkx, numpy, packaging, pennylane-lightning, requests, rustworkx, scipy, toml, typing-extensions
Required-by: PennyLane_Lightning

Platform info:           Linux-6.1.85+-x86_64-with-glibc2.35
Python version:          3.10.12
Numpy version:           1.26.4
Scipy version:           1.13.1
Installed devices:
- lightning.qubit (PennyLane_Lightning-0.39.0)
- default.clifford (PennyLane-0.39.0)
- default.gaussian (PennyLane-0.39.0)
- default.mixed (PennyLane-0.39.0)
- default.qubit (PennyLane-0.39.0)
- default.qutrit (PennyLane-0.39.0)
- default.qutrit.mixed (PennyLane-0.39.0)
- default.tensor (PennyLane-0.39.0)
- null.qubit (PennyLane-0.39.0)
- reference.qubit (PennyLane-0.39.0)

Hi @Lucas!

Thank you for your question. default.tensor is a relatively new device and it may be incompatible with Keras 2. Let me check with the team and get back to you.

Just to be sure, what’s the exact TensorFlow version you’re using?

Hi Catalina,

Thank you for your quick response.
I am using tensorflow==2.16.2, but I have also tried other versions (I don't remember all of them, but at least back to 2.15.0).
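
For reference, I checked the versions with a quick snippet like this (just a sanity check, not part of the training code):

import tensorflow as tf
print(tf.__version__)        # 2.16.2 in my case
print(tf.keras.__version__)  # shows which Keras (2.x via tf_keras, or 3.x) is actually in use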

I had a look at the source code, and the issue seems to stem from pennylane/workflow/interfaces/tensorflow_autograph.py, line 178.

I experimented with the dtypes but then ran into other issues; I think casting to different dtypes interfered with how the tensors were cached on the CPU.

Thank you for your help!

Hi @Lucas,

It seems that the issue might be due to a bug in the newest version of quimb, which is what default.tensor uses to do its magic. We have an open PR to fix this and there’s an open discussion in the quimb repository too.

In the meantime, the recommendation would be to try with quimb 1.8.4.
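
For example, one way to pin it (a sketch; adjust to your own environment):

# In a terminal or notebook cell:
#     pip install "quimb==1.8.4"
# Then verify from Python that the pinned version is the one being imported:
import quimb
print(quimb.__version__)  # should report 1.8.4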

Let us know if this works for you or if you’re still having the same issue!

Hi Catalina,

Hope you are doing well.
Quimb 1.8.4 did not seem to solve the issue.
I've been closely following the quimb issue you linked on GitHub; supposedly it was resolved in yesterday's release, quimb 1.10.0.

My code runs just fine with default.qubit, but once again I get an error when I run it with default.tensor.

import tensorflow as tf
from tensorflow.keras import layers
import pennylane as qml
from pennylane import numpy as np
tf.keras.backend.set_floatx('float64')
def generator_model_policy(qubits, n_layers, n_actions):
    dev = qml.device("default.tensor", wires=qubits, method="mps", max_bond_dim=50, cutoff=np.finfo(np.float64).eps)
    #dev = qml.device("default.qubit", wires=qubits)
    @qml.qnode(dev, interface='tf',diff_method="best")  
    def circuit(inputs, params):
        for l in range(n_layers):
            for qubit in range(qubits):
                qml.RX(params[l, qubit, 0], wires=qubit)
                qml.RY(params[l, qubit, 1], wires=qubit)
                qml.RZ(params[l, qubit, 2], wires=qubit)
            for qubit in range(qubits - 1):
                qml.CNOT(wires=[qubit, qubit + 1])
            for qubit in range(qubits):
                qml.RX(inputs[:,qubit], wires=qubit)

        result = [qml.expval(qml.PauliZ(qubit)) for qubit in range(qubits)]
        return result
    input_tensor = tf.keras.Input(shape=(qubits, ), dtype=tf.dtypes.float32, name='input')
    weight_shapes = {"params": (n_layers + 1, qubits, 3)}
    qlayer = qml.qnn.KerasLayer(circuit, weight_shapes, output_dim=qubits)(input_tensor)
    model = tf.keras.Model(inputs=[input_tensor], outputs=qlayer)
    model.summary()
    return model

def make_discriminator_model_conv(chopsize):
    model = tf.keras.Sequential()
    model.add(layers.Convolution1D(64,10,padding='same',input_shape=(chopsize,1)))
    model.add(layers.LeakyReLU(0.2))
    model.add(layers.Convolution1D(128,10,padding='same'))
    model.add(layers.LeakyReLU(0.2))
    model.add(layers.Convolution1D(128,10,padding='same'))
    model.add(layers.LeakyReLU(0.2))
    model.add(layers.Flatten())
    model.add(layers.Dense(32))
    model.add(layers.LeakyReLU(0.2))
    model.add(layers.Dropout(0.5))
    model.add(layers.Dense(1))
    model.summary()
    return model


qubits = 4
n_layers = 2
n_actions = 2

#model.compile(optimizer=tf.keras.optimizers.Adam(learning_rate=0.1), loss='mean_squared_error')

# Define the input data
X = np.random.random((1, qubits))
Y = np.random.random((1, qubits))


def wasserstein_loss_critic(real_output: tf.Tensor, fake_output: tf.Tensor) -> tf.Tensor:
    """The Wasserstein loss of the discriminator from its output on fake and real data.

    :param real_output: Output of the discriminator when given real data.
    :param fake_output: Output of the discriminator when given fake data from generator.

    :return: Wasserstein loss evaluation for the discriminator

    """
    real_loss = tf.reduce_mean(real_output)
    fake_loss = tf.reduce_mean(fake_output)
    return fake_loss - real_loss

def wasserstein_loss_generator(fake_output: tf.Tensor) -> tf.Tensor:
    """The Wasserstein loss of the generator from the output of the discriminator on fake data.

    :param fake_output: Output of the discriminator when given fake data from generator.

    :return: Cross entropy with Fake/True labels given by discriminator.

    """

    return -tf.reduce_mean(fake_output)

@tf.function
def gradient_penalty(critic:tf.keras.Model, fake_generator_output: tf.Tensor, images: tf.Tensor) -> tf.Tensor:
    """ Penalty from the gradients"""

    alpha = tf.random.uniform(
        [images.shape[0]])
    images = tf.cast(images, tf.float64)
    difference_fake_real = fake_generator_output - images 
    alpha = tf.cast(alpha, tf.float64)
    interpolation = images + (alpha * difference_fake_real)
    with tf.GradientTape() as tape:
        tape.watch(interpolation)
        preds = critic(interpolation, training=True)
    gradients = tape.gradient(preds, [interpolation])[0]
    slopes = tf.sqrt(tf.reduce_sum(tf.square(gradients), axis=[1]))#, 2, 3
    gp = tf.reduce_mean((slopes - 1.0) ** 2)
    return gp

@tf.function
def train_generator(gan_instance:tf.keras.Model, critic:tf.keras.Model, noise: tf.Tensor):
    """ Update the generator parameter with its optimizer"""

    with tf.GradientTape() as tape:
        generated_images = gan_instance(noise, training=False)
        generated_images = tf.expand_dims(generated_images, -1)
        fake_output = critic(generated_images, training=True)

        gen_loss = wasserstein_loss_generator(fake_output)

    gradients_of_generator = tape.gradient(
        gen_loss, gan_instance.trainable_variables
    )
    generator_optimizer.apply_gradients(
        zip(gradients_of_generator, gan_instance.trainable_variables)
    )

@tf.function
def train_critic(gan_instance:tf.keras.Model, critic:tf.keras.Model, noise: tf.Tensor, images: tf.Tensor):
    """Update the generator parameter with its optimizer

    :param noise: Input of the generator
    :param images: Real images as input for the discriminator.
    """

    with tf.GradientTape() as tape:
        generated_images = gan_instance(noise, training=False)
        generated_images = tf.expand_dims(generated_images, -1)
        images = tf.expand_dims(images, -1)
        real_output = critic(images, training=True)
        fake_output = critic(generated_images, training=True)
        disc_loss = wasserstein_loss_critic(real_output, fake_output)
        penalty_loss = gradient_penalty(critic, generated_images, images)
        gradient_penalty_weight = 10.0
        disc_loss += penalty_loss * gradient_penalty_weight

    gradients_of_critic = tape.gradient(disc_loss, critic.trainable_variables)
    discriminator_optimizer.apply_gradients(
        zip(gradients_of_critic, critic.trainable_variables)
    )

# Create the generator and critic
generator = generator_model_policy(qubits, n_layers, n_actions)
critic = make_discriminator_model_conv(qubits)
generator_optimizer = tf.keras.optimizers.Adam(learning_rate=0.0001)
discriminator_optimizer = tf.keras.optimizers.Adam(learning_rate=0.0001)

# Train the model
epochs = 10
for epoch in range(epochs):
    noise = np.random.random((1, qubits))
    images = np.random.random((1, qubits))
    train_critic(generator, critic, noise, images)
    train_generator(generator, critic, noise)
    print("Epoch", epoch, "done.")


The error I get is:

2024-12-20 13:19:39.760285: I tensorflow/core/platform/cpu_feature_guard.cc:182] This TensorFlow binary is optimized to use available CPU instructions in performance-critical operations.
To enable the following instructions: AVX2 FMA, in other operations, rebuild TensorFlow with the appropriate compiler flags.
OMP: Info #276: omp_set_nested routine deprecated, please use omp_set_max_active_levels instead.
Model: "model"
_________________________________________________________________
 Layer (type)                Output Shape              Param #   
=================================================================
 input (InputLayer)          [(None, 4)]               0         
                                                                 
 keras_layer (KerasLayer)    (None, 4)                 36        
                                                                 
=================================================================
Total params: 36 (288.00 Byte)
Trainable params: 36 (288.00 Byte)
Non-trainable params: 0 (0.00 Byte)
_________________________________________________________________
Model: "sequential"
_________________________________________________________________
 Layer (type)                Output Shape              Param #   
=================================================================
 conv1d (Conv1D)             (None, 4, 64)             704       
                                                                 
 leaky_re_lu (LeakyReLU)     (None, 4, 64)             0         
                                                                 
 conv1d_1 (Conv1D)           (None, 4, 128)            82048     
                                                                 
 leaky_re_lu_1 (LeakyReLU)   (None, 4, 128)            0         
                                                                 
 conv1d_2 (Conv1D)           (None, 4, 128)            163968    
                                                                 
 leaky_re_lu_2 (LeakyReLU)   (None, 4, 128)            0         
                                                                 
 flatten (Flatten)           (None, 512)               0         
                                                                 
 dense (Dense)               (None, 32)                16416     
                                                                 
 leaky_re_lu_3 (LeakyReLU)   (None, 32)                0         
                                                                 
 dropout (Dropout)           (None, 32)                0         
                                                                 
 dense_1 (Dense)             (None, 1)                 33        
                                                                 
=================================================================
Total params: 263169 (2.01 MB)
Trainable params: 263169 (2.01 MB)
Non-trainable params: 0 (0.00 Byte)
_________________________________________________________________
Traceback (most recent call last):
  File "/Users/lucas/Quantum_Finance_real/Improved_code/Secondary/iehrbferf.py", line 153, in <module>
    train_critic(generator, critic, noise, images)
  File "/Users/lucas/Quantum_Finance_real/.venv/lib/python3.10/site-packages/tensorflow/python/util/traceback_utils.py", line 153, in error_handler
    raise e.with_traceback(filtered_tb) from None
  File "/var/folders/dz/jgrz95p17l1g0j9ld7jyp7kc0000gn/T/__autograph_generated_filevzn39xd8.py", line 14, in tf__train_critic
    generated_images = ag__.converted_call(ag__.ld(gan_instance), (ag__.ld(noise),), dict(training=False), fscope)
  File "/Users/lucas/Quantum_Finance_real/.venv/lib/python3.10/site-packages/keras/src/utils/traceback_utils.py", line 70, in error_handler
    raise e.with_traceback(filtered_tb) from None
  File "/var/folders/dz/jgrz95p17l1g0j9ld7jyp7kc0000gn/T/__autograph_generated_filesq2gztw4.py", line 37, in tf__call
    results = ag__.converted_call(ag__.ld(self)._evaluate_qnode, (ag__.ld(inputs),), None, fscope)
  File "/var/folders/dz/jgrz95p17l1g0j9ld7jyp7kc0000gn/T/__autograph_generated_fileon30z2bz.py", line 61, in tf___evaluate_qnode
    ag__.if_stmt(ag__.converted_call(ag__.ld(isinstance), (ag__.ld(res), (ag__.ld(list), ag__.ld(tuple))), None, fscope), if_body_1, else_body_1, get_state_1, set_state_1, ('do_return', 'retval_', 'res'), 2)
  File "/var/folders/dz/jgrz95p17l1g0j9ld7jyp7kc0000gn/T/__autograph_generated_fileon30z2bz.py", line 45, in if_body_1
    ag__.if_stmt(ag__.converted_call(ag__.ld(len), (ag__.ld(x).shape,), None, fscope) > 1, if_body, else_body, get_state, set_state, ('res',), 1)
  File "/var/folders/dz/jgrz95p17l1g0j9ld7jyp7kc0000gn/T/__autograph_generated_fileon30z2bz.py", line 40, in if_body
    res = [ag__.converted_call(ag__.ld(tf).reshape, (ag__.ld(r), (ag__.converted_call(ag__.ld(tf).shape, (ag__.ld(x),), None, fscope)[0], ag__.converted_call(ag__.ld(tf).reduce_prod, (ag__.ld(r).shape[1:],), None, fscope))), None, fscope) for r in ag__.ld(res)]
  File "/var/folders/dz/jgrz95p17l1g0j9ld7jyp7kc0000gn/T/__autograph_generated_fileon30z2bz.py", line 40, in <listcomp>
    res = [ag__.converted_call(ag__.ld(tf).reshape, (ag__.ld(r), (ag__.converted_call(ag__.ld(tf).shape, (ag__.ld(x),), None, fscope)[0], ag__.converted_call(ag__.ld(tf).reduce_prod, (ag__.ld(r).shape[1:],), None, fscope))), None, fscope) for r in ag__.ld(res)]
ValueError: in user code:

    File "/Users/lucas/Quantum_Finance_real/Improved_code/Secondary/iehrbferf.py", line 127, in train_critic  *
        generated_images = gan_instance(noise, training=False)
    File "/Users/lucas/Quantum_Finance_real/.venv/lib/python3.10/site-packages/keras/src/utils/traceback_utils.py", line 70, in error_handler  **
        raise e.with_traceback(filtered_tb) from None
    File "/var/folders/dz/jgrz95p17l1g0j9ld7jyp7kc0000gn/T/__autograph_generated_filesq2gztw4.py", line 37, in tf__call
        results = ag__.converted_call(ag__.ld(self)._evaluate_qnode, (ag__.ld(inputs),), None, fscope)
    File "/var/folders/dz/jgrz95p17l1g0j9ld7jyp7kc0000gn/T/__autograph_generated_fileon30z2bz.py", line 61, in tf___evaluate_qnode
        ag__.if_stmt(ag__.converted_call(ag__.ld(isinstance), (ag__.ld(res), (ag__.ld(list), ag__.ld(tuple))), None, fscope), if_body_1, else_body_1, get_state_1, set_state_1, ('do_return', 'retval_', 'res'), 2)
    File "/var/folders/dz/jgrz95p17l1g0j9ld7jyp7kc0000gn/T/__autograph_generated_fileon30z2bz.py", line 45, in if_body_1
        ag__.if_stmt(ag__.converted_call(ag__.ld(len), (ag__.ld(x).shape,), None, fscope) > 1, if_body, else_body, get_state, set_state, ('res',), 1)
    File "/var/folders/dz/jgrz95p17l1g0j9ld7jyp7kc0000gn/T/__autograph_generated_fileon30z2bz.py", line 40, in if_body
        res = [ag__.converted_call(ag__.ld(tf).reshape, (ag__.ld(r), (ag__.converted_call(ag__.ld(tf).shape, (ag__.ld(x),), None, fscope)[0], ag__.converted_call(ag__.ld(tf).reduce_prod, (ag__.ld(r).shape[1:],), None, fscope))), None, fscope) for r in ag__.ld(res)]
    File "/var/folders/dz/jgrz95p17l1g0j9ld7jyp7kc0000gn/T/__autograph_generated_fileon30z2bz.py", line 40, in <listcomp>
        res = [ag__.converted_call(ag__.ld(tf).reshape, (ag__.ld(r), (ag__.converted_call(ag__.ld(tf).shape, (ag__.ld(x),), None, fscope)[0], ag__.converted_call(ag__.ld(tf).reduce_prod, (ag__.ld(r).shape[1:],), None, fscope))), None, fscope) for r in ag__.ld(res)]

    ValueError: Exception encountered when calling layer 'keras_layer' (type KerasLayer).
    
    in user code:
    
        File "/Users/lucas/Quantum_Finance_real/.venv/lib/python3.10/site-packages/pennylane/qnn/keras.py", line 414, in call  *
            results = self._evaluate_qnode(inputs)
        File "/Users/lucas/Quantum_Finance_real/.venv/lib/python3.10/site-packages/pennylane/qnn/keras.py", line 443, in _evaluate_qnode  *
            res = [tf.reshape(r, (tf.shape(x)[0], tf.reduce_prod(r.shape[1:]))) for r in res]
    
        ValueError: Cannot convert a partially known TensorShape <unknown> to a Tensor.
    
    
    Call arguments received by layer 'keras_layer' (type KerasLayer):
      • inputs=tf.Tensor(shape=(1, 4), dtype=float64)

This is an error I got before the update as well, so maybe I am doing something wrong.
My qml.about():

Name: PennyLane
Version: 0.39.0
Summary: PennyLane is a cross-platform Python library for quantum computing, quantum machine learning, and quantum chemistry. Train a quantum computer the same way as a neural network.
Home-page: https://github.com/PennyLaneAI/pennylane
Author: 
Author-email: 
License: Apache License 2.0
Location: /Users/lucas/Quantum_Finance_real/.venv/lib/python3.10/site-packages
Requires: appdirs, autograd, autoray, cachetools, networkx, numpy, packaging, pennylane-lightning, requests, rustworkx, scipy, toml, typing-extensions
Required-by: PennyLane-Cirq, PennyLane_Lightning

Platform info:           macOS-10.16-x86_64-i386-64bit
Python version:          3.10.15
Numpy version:           1.26.4
Scipy version:           1.14.1
Installed devices:
- default.clifford (PennyLane-0.39.0)
- default.gaussian (PennyLane-0.39.0)
- default.mixed (PennyLane-0.39.0)
- default.qubit (PennyLane-0.39.0)
- default.qutrit (PennyLane-0.39.0)
- default.qutrit.mixed (PennyLane-0.39.0)
- default.tensor (PennyLane-0.39.0)
- null.qubit (PennyLane-0.39.0)
- reference.qubit (PennyLane-0.39.0)
- cirq.mixedsimulator (PennyLane-Cirq-0.39.0)
- cirq.pasqal (PennyLane-Cirq-0.39.0)
- cirq.qsim (PennyLane-Cirq-0.39.0)
- cirq.qsimh (PennyLane-Cirq-0.39.0)
- cirq.simulator (PennyLane-Cirq-0.39.0)
- lightning.qubit (PennyLane_Lightning-0.39.0)

Thank you in advance and happy holidays.

By the way, model.fit does work; I think the issue is with tf.GradientTape.
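
To illustrate what I mean, here is a minimal sketch of the two call paths, reusing the generator, X, Y, and qubits defined in the code above (so it is not fully self-contained, just how I am testing it):

import tensorflow as tf
from pennylane import numpy as np

noise = np.random.random((1, qubits))

# Path 1: model.fit — this runs for me, even with default.tensor.
generator.compile(optimizer=tf.keras.optimizers.Adam(learning_rate=0.0001), loss="mse")
generator.fit(X, Y, epochs=1, verbose=0)

# Path 2: calling the model inside a tf.GradientTape under @tf.function,
# as train_critic does — this is where the TensorShape <unknown> error appears.
@tf.function
def forward_with_tape(z):
    with tf.GradientTape() as tape:
        out = generator(z, training=False)
        loss = tf.reduce_mean(out)
    return tape.gradient(loss, generator.trainable_variables)

grads = forward_with_tape(noise)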

Hi @Lucas,

Thank you for reporting this; I can replicate your issue.
I've shared the info with our dev team. Since it's almost the end of the year, responses may take a couple of weeks while everyone gets back from vacation.

Thanks again for helping us improve and let us know if you encounter any other issues.