Cannot pass a symbolic Keras tensor to the postselect argument of qml.measure() when running in TensorFlow graph mode

Hi. I found that the postselect argument cannot receive a symbolic Keras tensor when the program runs in graph mode. Is there a solution to this problem?

What I want to achieve is to perform a measurement on the third qubit and output both the density matrix of the first qubit and the measurement result. To do this, I first use a circuit that returns the measurement probabilities of the third qubit (the ‘selecuit’ in the code), then I manually sample a value based on those probabilities (the ‘measure’ variable in the code) and pass it to the postselect argument of qml.measure() (in ‘qcircuit’ in the code).

However, it seems that the postselect argument cannot receive a symbolic tensor when running in TensorFlow graph mode. Is there a way to achieve what I want? Thanks in advance.

I would also appreciate any other approach to achieving my goal; I think my current programming strategy is a bit clumsy.
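
If it helps, I think the failing pattern boils down to something like the following sketch (hypothetical and much smaller than my actual model; it only samples an outcome inside a tf.function and passes it to postselect, analogous to what data_flowing does in the full code below):

import tensorflow as tf
import pennylane as qml

dev = qml.device("default.qubit", wires=2)

@qml.qnode(dev, interface="tf")
def qcircuit(ps):
    qml.Hadamard(wires=0)
    qml.CNOT(wires=[0, 1])
    qml.measure(1, postselect=ps)
    return qml.density_matrix([0])

@tf.function  # graph mode: `ps` is traced as a symbolic tensor in here
def step():
    probs = tf.constant([[0.5, 0.5]])
    # sample a measurement outcome and feed it to postselect
    ps = tf.reshape(tf.random.categorical(tf.math.log(probs), 1, dtype=tf.int32), [])
    return qcircuit(ps)

step()  # expected to raise the same NotImplementedError as the full code below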

The following is a simplified version of my code:

import tensorflow as tf
import keras
from keras import mixed_precision
from keras.layers import Concatenate, concatenate
from keras.models import Model
from functools import partial
import pennylane as qml
import matplotlib.pyplot as plt
import numpy as np
from tensorflow.python.ops.numpy_ops import np_config
import random
import sys
import math
import time

tf.get_logger().setLevel('ERROR')
DEFAULT_TENSOR_TYPE = tf.float64
################################################################
#Parameters
################################################################
qbit = 2
bit = 2**qbit
num_epoch = 100000
LL = 40
QLL = 10
TT = 5
lamt = 0.1
sam = 1
################################################################
#Data preparation
################################################################
initial_data = tf.constant([[[ 0.70819672+0.j,         -0.43169441-0.14245724j,],
  [-0.43169441+0.14245724j,  0.29180328+0.j        ]]], dtype=tf.complex64)
initial_dataset = tf.data.Dataset.from_tensors(initial_data)
target_data = tf.constant([[[ 0.24980155+0.j,          0.40491117-0.15312634j],
  [ 0.40491117+0.15312634j,  0.75019845+0.j        ]]], dtype=tf.complex64)
target_dataset = tf.data.Dataset.from_tensors(target_data)
D_set = tf.data.Dataset.zip(initial_dataset,target_dataset)
################################################################
#Quantum circuit
################################################################
dev = qml.device("default.qubit", wires=qbit+1)
@qml.qnode(dev, interface='tf')
def selecuit(inputs,theta,dir):
    qml.StatePrep(inputs, wires=0)
    for i in range(QLL):
        qml.RX(theta[i+0], wires=0)
        qml.RX(theta[i+1], wires=1)
        qml.RZ(theta[i+2], wires=0)
        qml.RZ(theta[i+3], wires=1)
        qml.CZ(wires=[0,1])
    qml.RY(-np.pi/2,wires = 2)
    qml.QubitUnitary(weak_measure(dir), wires=[0,1])
    return qml.probs(wires=[1])
@qml.qnode(dev, interface='tf')
def qcircuit(inputs,theta,dir,ps):
    qml.StatePrep(inputs, wires=0)
    for i in range(QLL):
        qml.RX(theta[i+0], wires=0)
        qml.RX(theta[i+1], wires=1)
        qml.RZ(theta[i+2], wires=0)
        qml.RZ(theta[i+3], wires=1)
        qml.CZ(wires=[0,1])
    qml.RY(-np.pi/2,wires = 2)
    qml.QubitUnitary(weak_measure(dir), wires=[0,2])
    qml.measure(1,postselect = ps)
    return qml.density_matrix([0])
def weak_measure(dir): #dir[0]: X-direction; dir[1]: Y-direction; dir[2]: Z-direction
    cos = tf.math.cos(lamt)
    sin = tf.math.sin(lamt)
    cosc = tf.cast(cos,tf.complex64)
    sinc = tf.cast(sin,tf.complex64)
    def Xmatrix(): return tf.Variable(
        [[cos,0.0,0.0,-sin],
         [0.0,cos,sin,0.0],
         [0.0,-sin,cos,0.0],
         [sin,0.0,0.0,cos]]
        , dtype=tf.complex64)
    def Ymatrix(): return tf.Variable(
        [[cosc,0.0+0.j,0.0+0.j,0.0+sinc],
         [0.0+0.j,cosc,0.0-sinc,0.0+0.j],
         [0.0+0.j,0.0-sinc,cosc,0.0+0.j],
         [0.0+sinc,0.0+0.j,0.0+0.j,cosc]]
        , dtype=tf.complex64)
    def Zmatrix(): return tf.Variable(
        [[cos,-sin,0.0,0.0],
         [sin,cos,0.0,0.0],
         [0.0,0.0,cos,sin],
         [0.0,0.0,-sin,cos]]
        , dtype=tf.complex64)
    matrix = tf.case([(tf.slice(dir,[0],[1]) == 1, Xmatrix), (tf.slice(dir,[1],[1]) == 1, Ymatrix),(tf.slice(dir,[2],[1]) == 1, Zmatrix)])
    return matrix
def convert_to_Svector(dm):
    a,b = tf.linalg.eigh(dm)
    c = tf.slice(b,[0,1],[2,1])
    c = tf.reshape(c,[2])
    return c
################################################################
#Loss function
################################################################
def MMDloss(initial,target):
    initial = tf.reshape(initial,[2,2])
    target = tf.reshape(target,[2,2])
    inp = qml.math.fidelity(initial,initial)
    return inp
################################################################
#Custom Model
################################################################
def softmax(bdir):
    adir = tf.nn.softmax(bdir)
    adir = tf.math.top_k(adir,k=1)
    def fxx(): return tf.constant([1,0,0], dtype=tf.int32)
    def fyy(): return tf.constant([0,1,0], dtype=tf.int32)
    def fzz(): return tf.constant([0,0,1], dtype=tf.int32)
    ax, ay, az = tf.constant(0, dtype=tf.int32),tf.constant(1, dtype=tf.int32),tf.constant(2, dtype=tf.int32)
    dir = tf.case([(adir.indices == ax, fxx), (adir.indices == ay, fyy),(adir.indices == az, fzz)])
    return dir
class RNNModel(keras.Model):
    def __init__(self, model1):
        super().__init__()
        self.model1 = model1
        self.mae_metric = keras.metrics.MeanAbsoluteError(name="mae")
    def train_step(self, data):
        qs, target = data
        loss = 0
        with tf.GradientTape() as tape:
            #1
            h_state11,c_state11,h_state12,c_state12 = self.model1(tf.zeros([1,1,4], dtype=DEFAULT_TENSOR_TYPE),None,None,None,None, training=True)
            dm1,dir_measure1 = self.data_flowing(qs[0,:,:],h_state12,1)
            pred = tf.reshape(dm1,[1,2,2])
            for i in range(sam-1):
                #1
                h_state11,c_state11,h_state12,c_state12 = self.model1(tf.zeros([1,1,4]),None,None,None,None, training=True)
                dm1,dir_measure1 = self.data_flowing(qs[i+1,:,:],h_state12,1)
                dm1 = tf.reshape(dm1,[1,2,2])
                pred = tf.concat([pred,dm1],0)
            loss = MMDloss(pred, target)
        # Compute gradients
        trainable_vars = self.trainable_variables
        gradients = tape.gradient(loss, trainable_vars)
        # Update weights
        self.optimizer.apply_gradients(zip(gradients, trainable_vars))
        # Update metrics (includes the metric that tracks the loss)
        self.mae_metric.update_state(pred,target)
        return { "mae": self.mae_metric.result()}
    def data_flowing(self,idm,h_state,nn):
        istate = convert_to_Svector(idm)
        istate = tf.convert_to_tensor(istate)
        h_state = tf.reshape(h_state, [LL*qbit+3])
        theta = tf.slice(h_state,begin=[0],size=[LL*qbit])
        dir = tf.constant([1, 0, 0], shape=(3,), dtype=tf.int32)
        measure = selecuit(istate,theta,dir)
        measure = tf.reshape(measure, [1,2])
        measure = tf.random.categorical(tf.math.log(measure),1,dtype=tf.int32)
        measure = tf.reshape(measure, [])
        odm = qcircuit(istate,theta,dir,measure)
        mreasure = tf.constant([1], shape=(3,), dtype=tf.int32)
        measure = tf.reshape(measure,[1,1])
        dir = tf.reshape(dir,[1,3])
        odir_measure = tf.concat([measure, dir], 1)
        odir_measure = tf.reshape(odir_measure,[1,1,4])
        odir_measure = tf.cast(odir_measure,dtype=DEFAULT_TENSOR_TYPE)
        return odm,odir_measure
#######################################################################
#LSTM
#######################################################################
class sModel(keras.Model):
    def __init__(self):
        super().__init__()
        self.RNN1 = tf.keras.layers.LSTM(LL*qbit, return_state=True)
        self.RNN2 = tf.keras.layers.LSTM(LL*qbit+3, return_state=True)

    def call(self, input,state1h,state1c,state2h,state2c):
        if state1c == None and state1h == None:
            RNN_11,RNN_12,RNN_13 = self.RNN1(input, training=True)
        else:
            RNN_11,RNN_12,RNN_13 = self.RNN1(input, initial_state=[state1c,state1h], training=True)
        RNN_11 = tf.reshape(RNN_11,[1,1,LL*qbit])
        if state2c == None and state2h == None:
            RNN_21,RNN_22,RNN_23 = self.RNN2(RNN_11, training=True)
        else:
            RNN_21,RNN_22,RNN_23 = self.RNN2(RNN_11, initial_state=[state2c,state2h], training=True)
        RNN_11 = tf.reshape(RNN_11,[1,LL*qbit])
        return RNN_11,RNN_13,RNN_21,RNN_23
################################################################
#Executing
################################################################
model1= sModel()
model = RNNModel(model1)
model.compile(optimizer="Adam")
ii = tf.ones([1,1,4], dtype=DEFAULT_TENSOR_TYPE)
state = tf.zeros([1,LL*qbit], dtype=DEFAULT_TENSOR_TYPE)
state1 = tf.zeros([1,LL*qbit+3], dtype=DEFAULT_TENSOR_TYPE)
# model.run_eagerly = True
model.fit(x = D_set, batch_size=None, epochs=num_epoch)
print(model)
print(model.summary())

If I uncomment ‘model.run_eagerly = True’, this demo runs with no error message, since symbolic tensors only appear in graph mode.
Alternatively, if I manually hard-code a value for postselect, the error message moves on to the next problem (sorry, the code still needs further debugging in graph mode).
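
As far as I can tell, the workarounds below both just avoid graph mode, so the postselect value stays a concrete eager value instead of a symbolic tensor:

# Option 1: run only this Keras model eagerly
model.compile(optimizer="Adam", run_eagerly=True)

# Option 2: force all tf.functions to run eagerly (global switch, slower)
tf.config.run_functions_eagerly(True)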

The error message is:

Traceback (most recent call last):
  File "/home/max/qc/RNN/ps.py", line 202, in <module>
    model.fit(x = D_set, batch_size=None, epochs=num_epoch)
  File "/root/miniconda3/envs/tc/lib/python3.9/site-packages/keras/src/utils/traceback_utils.py", line 70, in error_handler
    raise e.with_traceback(filtered_tb) from None
  File "/tmp/__autograph_generated_fileqeq8t4u6.py", line 15, in tf__train_function
    retval_ = ag__.converted_call(ag__.ld(step_function), (ag__.ld(self), ag__.ld(iterator)), None, fscope)
  File "/home/max/qc/RNN/ps.py", line 133, in train_step
    dm1,dir_measure1 = self.data_flowing(qs[0,:,:],h_state12,1)
  File "/home/max/qc/RNN/ps.py", line 160, in data_flowing
    odm = qcircuit(istate,theta,dir,measure)
  File "/root/miniconda3/envs/tc/lib/python3.9/site-packages/pennylane/workflow/qnode.py", line 1164, in __call__
    return self._impl_call(*args, **kwargs)
  File "/root/miniconda3/envs/tc/lib/python3.9/site-packages/pennylane/workflow/qnode.py", line 1150, in _impl_call
    res = self._execution_component(args, kwargs, override_shots=override_shots)
  File "/root/miniconda3/envs/tc/lib/python3.9/site-packages/pennylane/workflow/qnode.py", line 1103, in _execution_component
    res = qml.execute(
  File "/root/miniconda3/envs/tc/lib/python3.9/site-packages/pennylane/workflow/execution.py", line 650, in execute
    tapes, post_processing = transform_program(tapes)
  File "/root/miniconda3/envs/tc/lib/python3.9/site-packages/pennylane/transforms/core/transform_program.py", line 515, in __call__
    new_tapes, fn = transform(tape, *targs, **tkwargs)
  File "/root/miniconda3/envs/tc/lib/python3.9/site-packages/pennylane/devices/preprocess.py", line 172, in mid_circuit_measurements
    return qml.defer_measurements(tape, device=device)
  File "/root/miniconda3/envs/tc/lib/python3.9/site-packages/pennylane/transforms/core/transform_dispatcher.py", line 113, in __call__
    transformed_tapes, processing_fn = self._transform(obj, *targs, **tkwargs)
  File "/root/miniconda3/envs/tc/lib/python3.9/site-packages/pennylane/transforms/defer_measurements.py", line 310, in defer_measurements
    new_operations.append(qml.Projector([op.postselect], wires=op.wires[0]))
  File "/root/miniconda3/envs/tc/lib/python3.9/site-packages/pennylane/capture/capture_meta.py", line 89, in __call__
    return type.__call__(cls, *args, **kwargs)
  File "/root/miniconda3/envs/tc/lib/python3.9/site-packages/pennylane/ops/qubit/observables.py", line 452, in __init__
    state = tuple(qml.math.toarray(state).astype(int))
  File "/root/miniconda3/envs/tc/lib/python3.9/site-packages/autoray/autoray.py", line 81, in do
    return func(*args, **kwargs)
  File "/root/miniconda3/envs/tc/lib/python3.9/site-packages/autoray/autoray.py", line 1524, in numpy_to_numpy
    return do("asarray", x, like="numpy")
  File "/root/miniconda3/envs/tc/lib/python3.9/site-packages/autoray/autoray.py", line 81, in do
    return func(*args, **kwargs)
NotImplementedError: in user code:

    File "/root/miniconda3/envs/tc/lib/python3.9/site-packages/keras/src/engine/training.py", line 1338, in train_function  *
        return step_function(self, iterator)
    File "/root/miniconda3/envs/tc/lib/python3.9/site-packages/keras/src/engine/training.py", line 1322, in step_function  **
        outputs = model.distribute_strategy.run(run_step, args=(data,))
    File "/root/miniconda3/envs/tc/lib/python3.9/site-packages/keras/src/engine/training.py", line 1303, in run_step  **
        outputs = model.train_step(data)
    File "/home/max/qc/RNN/ps.py", line 133, in train_step
        dm1,dir_measure1 = self.data_flowing(qs[0,:,:],h_state12,1)
    File "/home/max/qc/RNN/ps.py", line 160, in data_flowing
        odm = qcircuit(istate,theta,dir,measure)
    File "/root/miniconda3/envs/tc/lib/python3.9/site-packages/pennylane/workflow/qnode.py", line 1164, in __call__
        return self._impl_call(*args, **kwargs)
    File "/root/miniconda3/envs/tc/lib/python3.9/site-packages/pennylane/workflow/qnode.py", line 1150, in _impl_call
        res = self._execution_component(args, kwargs, override_shots=override_shots)
    File "/root/miniconda3/envs/tc/lib/python3.9/site-packages/pennylane/workflow/qnode.py", line 1103, in _execution_component
        res = qml.execute(
    File "/root/miniconda3/envs/tc/lib/python3.9/site-packages/pennylane/workflow/execution.py", line 650, in execute
        tapes, post_processing = transform_program(tapes)
    File "/root/miniconda3/envs/tc/lib/python3.9/site-packages/pennylane/transforms/core/transform_program.py", line 515, in __call__
        new_tapes, fn = transform(tape, *targs, **tkwargs)
    File "/root/miniconda3/envs/tc/lib/python3.9/site-packages/pennylane/devices/preprocess.py", line 172, in mid_circuit_measurements
        return qml.defer_measurements(tape, device=device)
    File "/root/miniconda3/envs/tc/lib/python3.9/site-packages/pennylane/transforms/core/transform_dispatcher.py", line 113, in __call__
        transformed_tapes, processing_fn = self._transform(obj, *targs, **tkwargs)
    File "/root/miniconda3/envs/tc/lib/python3.9/site-packages/pennylane/transforms/defer_measurements.py", line 310, in defer_measurements
        new_operations.append(qml.Projector([op.postselect], wires=op.wires[0]))
    File "/root/miniconda3/envs/tc/lib/python3.9/site-packages/pennylane/capture/capture_meta.py", line 89, in __call__
        return type.__call__(cls, *args, **kwargs)
    File "/root/miniconda3/envs/tc/lib/python3.9/site-packages/pennylane/ops/qubit/observables.py", line 452, in __init__
        state = tuple(qml.math.toarray(state).astype(int))
    File "/root/miniconda3/envs/tc/lib/python3.9/site-packages/autoray/autoray.py", line 81, in do
        return func(*args, **kwargs)
    File "/root/miniconda3/envs/tc/lib/python3.9/site-packages/autoray/autoray.py", line 1524, in numpy_to_numpy
        return do("asarray", x, like="numpy")
    File "/root/miniconda3/envs/tc/lib/python3.9/site-packages/autoray/autoray.py", line 81, in do
        return func(*args, **kwargs)

    NotImplementedError: Cannot convert a symbolic tf.Tensor (Reshape_49:0) to a numpy array. This error may indicate that you're trying to pass a Tensor to a NumPy call, which is not supported.

The qml.about() output is:

Name: PennyLane
Version: 0.37.0
Summary: PennyLane is a cross-platform Python library for quantum computing, quantum machine learning, and quantum chemistry. Train a quantum computer the same way as a neural network.
Home-page: https://github.com/PennyLaneAI/pennylane
Author:
Author-email:
License: Apache License 2.0
Location: /root/miniconda3/envs/tc/lib/python3.9/site-packages
Requires: appdirs, autograd, autoray, cachetools, networkx, numpy, packaging, pennylane-lightning, requests, rustworkx, scipy, semantic-version, toml, typing-extensions
Required-by: PennyLane_Lightning

Platform info:           Linux-5.15.153.1-microsoft-standard-WSL2-x86_64-with-glibc2.35
Python version:          3.9.18
Numpy version:           1.24.3
Scipy version:           1.13.1
Installed devices:
- default.clifford (PennyLane-0.37.0)
- default.gaussian (PennyLane-0.37.0)
- default.mixed (PennyLane-0.37.0)
- default.qubit (PennyLane-0.37.0)
- default.qubit.autograd (PennyLane-0.37.0)
- default.qubit.jax (PennyLane-0.37.0)
- default.qubit.legacy (PennyLane-0.37.0)
- default.qubit.tf (PennyLane-0.37.0)
- default.qubit.torch (PennyLane-0.37.0)
- default.qutrit (PennyLane-0.37.0)
- default.qutrit.mixed (PennyLane-0.37.0)
- default.tensor (PennyLane-0.37.0)
- null.qubit (PennyLane-0.37.0)
- lightning.qubit (PennyLane-Lightning-0.37.0)
None

Hey @Bear2s,

This is most likely a bug! We’re working on making a bug report that summarizes the issue in a more minimal way :slight_smile:

Thanks for the reply. Where should I report the issue for the bug report? Sorry, I am new to this.

Hi @Bear2s,

Thank you for reporting this problem!

It would be great if you could open a bug report in our GitHub repo. This report will allow the core PennyLane developers to understand this issue and look into fixing it.

It’s important to share a minimal reproducible example (like the one you shared above) so that we can focus on what’s actually causing the problem. A minimal reproducible example (or minimal working example) is the simplest version of the code that still reproduces the problem. It should be self-contained, including all necessary imports, data, functions, etc., so that we can copy-paste the code and reproduce the problem. However, it shouldn’t contain anything unnecessary, for example data, gates, or functions that can be removed while still triggering the error.

When asked about error tracebacks in the GitHub bug template, please include the full error traceback, as you did above.

If you have any questions about making the bug report please let us know!

Thanks for your detailed explanation. I have further simplified the code and uploaded it :smile:

Thanks @Bear2s !

I’m linking the bug report here for anyone else looking into this question.

Hey, I am still having a similar issue.
When I try to run the following code:

def generate_model_policy(qubits, n_layers, n_actions, max_bond=50, cutoff=np.finfo(np.complex128).eps):
    """Generates a Keras model for a data re-uploading PQC policy."""

    dev = qml.device("default.tensor", wires=qubits, method="mps", max_bond_dim=max_bond, cutoff=cutoff)
    #dev = qml.device("default.qubit", wires=qubits)
    
    @qml.qnode(dev, interface='tf')     
    def circuit(inputs, params):
        #params = tf.reshape(params, (n_layers + 1, n_qubits, 3))
        inputs = tf.reshape(inputs[0], (n_layers, qubits))
        expectations = []
        for l in range(n_layers):
            for qubit in range(qubits):
                qml.RX(params[l, qubit, 0], wires=qubit)
                qml.RY(params[l, qubit, 1], wires=qubit)
                qml.RZ(params[l, qubit, 2], wires=qubit)
            for qubit in range(qubits - 1):
                qml.CNOT(wires=[qubit, qubit + 1])
            for qubit in range(qubits):
                qml.RX(inputs[l, qubit], wires=qubit)
        for qubit in range(qubits):
            qml.RX(params[n_layers, qubit, 0], wires=qubit)
            qml.RY(params[n_layers, qubit, 1], wires=qubit)
            qml.RZ(params[n_layers, qubit, 2], wires=qubit)
        
        for qubit in range(qubits):
            expectations.append(qml.expval(op=qml.PauliZ(qubit)))
            expectations.append(qml.expval(op=qml.PauliX(qubit)))

        expectations = np.asarray(expectations)
        print("expectations shape:", expectations)
        return tf.cast(expectations, tf.float32)


    input_tensor = tf.keras.Input(shape=(qubits, ), dtype=tf.dtypes.float32, name='input')
    proccessinng_inoput = Input_procceeding(n_layers, qubits)(input_tensor)
    print("proccessinng_inoput shape:", proccessinng_inoput.shape)
    weight_shapes = {"params": (n_layers + 1, qubits, 3)}
    qlayer = qml.qnn.KerasLayer(circuit, weight_shapes, output_dim=2*qubits)(proccessinng_inoput)
    print("qlayer shape:", qlayer.shape)
    process = tf.keras.Sequential([
        Alternating(n_actions),
    ], name="observables-policy")
    policy = process(qlayer)
    model = tf.keras.Model(inputs=input_tensor, outputs=policy)
    model.summary()
    return model

I get the following error:

 TypeError: Exception encountered when calling layer 'keras_layer' (type KerasLayer).
    
    in user code:
    
        File "/Users/lucas/Quantum_Finance_real/.venv/lib/python3.10/site-packages/pennylane/qnn/keras.py", line 414, in call  *
            results = self._evaluate_qnode(inputs)
        File "/Users/lucas/Quantum_Finance_real/.venv/lib/python3.10/site-packages/pennylane/qnn/keras.py", line 437, in _evaluate_qnode  *
            res = self.qnode(**kwargs)
        File "/Users/lucas/Quantum_Finance_real/.venv/lib/python3.10/site-packages/pennylane/workflow/qnode.py", line 1164, in __call__  *
            return self._impl_call(*args, **kwargs)
        File "/Users/lucas/Quantum_Finance_real/.venv/lib/python3.10/site-packages/pennylane/workflow/qnode.py", line 1144, in _impl_call  *
            self.construct(args, kwargs)
        File "/Users/lucas/Quantum_Finance_real/.venv/lib/python3.10/site-packages/pennylane/logging/decorators.py", line 958, in wrapper_entry  *
            return func(*args, **kwargs)
        File "/Users/lucas/Quantum_Finance_real/.venv/lib/python3.10/site-packages/pennylane/workflow/qnode.py", line 966, in construct  *
            self._qfunc_output = self.func(*args, **kwargs)
        File "/Users/lucas/Quantum_Finance_real/Improved_code/Secondary/AA_Qubit_Models.py", line 149, in circuit  *
            return tf.cast(expectations, tf.float32)
        File "tensorflow/python/framework/fast_tensor_util.pyx", line 132, in tensorflow.python.framework.fast_tensor_util.AppendObjectArrayToTensorProto
            
    
        TypeError: Expected binary or unicode string, got probs(Z(0))

I have also tried running a different device such as default.qubit or even default.qutrit, but I still get the same error. I have also tried functions like qml.probs, but I still get the same error. Looking online and in the PennyLane tutorials, qml.expval seems to return a float.

Is there something I can do to circumvent this issue?
Thank you in advance for the help.

Kind regards, Lucas

Hi @Lucas, welcome to the Forum!

My guess is that the issue is caused by the post-processing you’re doing to the expectation values. QNodes expect you to return a measurement, and they handle the integration with TensorFlow, so you don’t need to convert anything to a TF float. E.g. you can return [qml.expval(qml.PauliZ(qubit)) for qubit in range(qubits)]
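
For example, a minimal self-contained sketch of that pattern (a hypothetical two-qubit circuit built from template layers, not your full model) could look like this:

import tensorflow as tf
import pennylane as qml

n_qubits = 2
n_layers = 1
dev = qml.device("default.qubit", wires=n_qubits)

@qml.qnode(dev, interface="tf")
def circuit(inputs, params):
    qml.AngleEmbedding(inputs, wires=range(n_qubits))
    qml.BasicEntanglerLayers(params, wires=range(n_qubits))
    # Return the measurement processes directly; the "tf" interface converts
    # the results to TF tensors, so no np.asarray or tf.cast is needed.
    return [qml.expval(qml.PauliZ(q)) for q in range(n_qubits)] + \
           [qml.expval(qml.PauliX(q)) for q in range(n_qubits)]

weight_shapes = {"params": (n_layers, n_qubits)}
qlayer = qml.qnn.KerasLayer(circuit, weight_shapes, output_dim=2 * n_qubits)

x = tf.constant([[0.1, 0.2]])  # batch of one 2-feature input
print(qlayer(x))               # expected shape: (1, 2 * n_qubits)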

On the other hand, I cannot replicate your issue because the code you shared is missing some information (Input_procceeding and Alternating).

Could you please share a minimal reproducible example so that we can try to replicate your problem? Please open a new topic with your new code, since your issue isn’t related to postselection, which is the topic of this thread. You can link to your original post in the new topic.
Let me know if you have any questions on how to do this.

I hope this helps!