DeviceError: Operation StatePrep cannot be used after other Operations have already been applied on a default.qubit.autograd device, and loss becomes NaN

```python
import pennylane as qml
from pennylane import numpy as np
from pennylane.templates import AmplitudeEmbedding, StronglyEntanglingLayers

n_qubits = 8

dev = qml.device("default.qubit", wires=n_qubits)

@qml.qnode(dev)
def circuit(weights, x=None):
    for i in range(2):
        AmplitudeEmbedding(x, wires=range(n_qubits), normalize=True)
        StronglyEntanglingLayers(weights[i], wires=range(n_qubits))
    return qml.expval(qml.PauliZ(0))
```

I want to repeat the state preparation inside the circuit, but it gives the error `DeviceError: Operation StatePrep cannot be used after other Operations have already been applied on a default.qubit.autograd device`. Please help.

If I use `MottonenStatePreparation` in place of `AmplitudeEmbedding`, it works, but it is slower :frowning:

How can I use `AmplitudeEmbedding` (or any other efficient method) to re-prepare the state mid-circuit?
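One workaround sketch (my reading of the error, not an official recipe): `AmplitudeEmbedding` maps to a `StatePrep` operation, which this device only accepts before any other gates, while `MottonenStatePreparation` decomposes into ordinary rotation gates and can therefore appear mid-circuit. If you normalize and zero-pad the input yourself, you can feed the result to `qml.MottonenStatePreparation` at any point. `prepare_amplitudes` below is a hypothetical helper name, not part of PennyLane:

```python
import numpy as np

def prepare_amplitudes(x, n_qubits):
    """L2-normalize and zero-pad x to length 2**n_qubits so it can be
    passed to qml.MottonenStatePreparation anywhere in the circuit.
    (Hypothetical helper, not part of PennyLane.)"""
    x = np.asarray(x, dtype=float)
    padded = np.zeros(2 ** n_qubits)
    padded[: len(x)] = x
    norm = np.linalg.norm(padded)
    if norm == 0:
        raise ValueError("input vector must not be all zeros")
    return padded / norm

amps = prepare_amplitudes(np.random.rand(200), 8)
print(len(amps))  # 256 amplitudes with unit norm
```

Inside the QNode you would then call `qml.MottonenStatePreparation(amps, wires=range(n_qubits))` in each block instead of `AmplitudeEmbedding`.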


Also, when I use `MottonenStatePreparation`, the loss becomes NaN partway through training:

```python
dev = qml.device("default.qubit", wires=n_qubits)

@qml.qnode(dev, diff_method="adjoint")
def circuit(weights, x=None):
    # normalize the input manually before state preparation
    norms = np.linalg.norm(x, keepdims=True)
    x = x / norms

    for i in range(blocks):
        if i % 2 == 0:
            # AmplitudeEmbedding(x, wires=range(n_qubits), normalize=True)
            qml.MottonenStatePreparation(x, wires=range(n_qubits))
        StronglyEntanglingLayers(weights[i], wires=range(n_qubits))

    return qml.expval(qml.PauliZ(0))

blocks = 3
layers = 2
weights = [np.random.random(size=(layers, n_qubits, 3), requires_grad=True)
           for i in range(blocks)]
```
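A likely source of the NaNs (a guess from the normalization step above, not a confirmed diagnosis): if a sample `x` is zero or near-zero, `x / np.linalg.norm(x)` produces NaNs, and the rotation angles in `MottonenStatePreparation` become undefined. A defensive sketch, where the `1e-12` floor is my choice:

```python
import numpy as np

x = np.zeros(256)              # a pathological input sample
naive = x / np.linalg.norm(x)  # 0/0 -> NaN everywhere
print(np.isnan(naive).all())   # True

# floor the norm so the division never produces NaN
safe = x / max(np.linalg.norm(x), 1e-12)
print(np.isnan(safe).any())    # False
```

Note that a zero vector still isn't a valid quantum state, so samples like this should also be skipped or re-scaled rather than merely made NaN-free.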

```python
from sklearn.model_selection import train_test_split

num_samples = 100
num_features = 256
random_array = np.random.rand(num_samples, num_features)

# Generate random binary labels for the samples
labels = np.random.randint(2, size=num_samples)

# Split the data into train and test sets with an 80:20 ratio
X_train, X_test, y_train, y_test = train_test_split(
    random_array, labels, test_size=0.2, random_state=42
)

# map {0, 1} labels to {-1, +1}
y_train = y_train * 2 - 1
y_test = y_test * 2 - 1
```

```python
# opt, cost, bias, variational_classifier and accuracy are defined elsewhere;
# abest/wbest/bbest track the best model found so far
abest = 0.0

for it in range(5):
    weights, bias, _, _ = opt.step(cost, weights, bias, X_train, y_train)

    # Compute the training accuracy
    predictions = [np.sign(variational_classifier(weights, bias, x)) for x in X_train]
    acc = accuracy(y_train, predictions)

    if acc > abest:
        wbest = weights
        bbest = bias
        abest = acc
        print("New best")

    print(
        "Iter: {:5d} | Training Cost: {:0.7f} | TR Accuracy: {:0.7f}".format(
            it + 1, cost(weights, bias, X_train, y_train), acc
        )
    )
```
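The loop uses `accuracy` and `abest` without showing their definitions. A minimal sketch of how they could look for ±1 labels (my assumptions, since those helpers aren't included in the post):

```python
import numpy as np

def accuracy(labels, predictions):
    """Fraction of predictions that match the +/-1 labels."""
    labels = np.asarray(labels)
    predictions = np.asarray(predictions)
    return np.mean(labels == predictions)

# initialize before the training loop so `acc > abest` works on iteration 1
abest = 0.0

y = np.array([1, -1, 1, -1])
p = np.array([1, -1, -1, -1])
print(accuracy(y, p))  # 0.75
```

Note that if the cost goes NaN, `np.sign` of a NaN prediction is also NaN, so accuracy will silently drop; checking `np.isnan` on the cost each iteration can catch the problem early.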

I also used an identity block at the beginning, but NaN still occurs when I use MottonenStatePreparation. Can you please check? I really need this to work with AmplitudeEmbedding, StatePrep, or MottonenStatePreparation. I am using PennyLane 0.33.1.

Hi @Amandeep,

Can you please post a complete version of your code for both scenarios? It should include all imports and functions necessary to run your code. Also, please make sure to format your code by using the "<>" symbol: first click on this symbol in an empty line, and then paste your code in between the backticks that appear.
I strongly recommend that you watch our video on how to make great Forum posts. It will make it easier for us to help you!