PennyLane statevector results different from Qiskit statevector?

@antalszava Thank you for your response. I am getting counts like

{'counts': {'0x9': 76, '0xa': 60, '0x14': 76, '0xc': 93, '0x3': 74, '0x6': 87, '0xf': 10

I think it returns a dict with hexadecimal keys; maybe I am wrong. I was expecting something like {'10100': 1, '00101': 1, '01111': 1, '00110': 1, '01101': 1, '01010': 1…

I need to access each binary key from the counts for an operation, but it throws the following error:

unsupported operand type(s) for *: 'float' and 'dict'

Hi @Amandeep 🙂

The docstring for the relevant Qiskit object is here.

I don't know exactly what operation you are trying to apply to the keys, but from the error you're getting, it sounds like you are applying it to the entire dictionary instead of to an individual element. You can iterate over the entries instead:

for state, counts in data['counts'].items():
    # act on each state (key) and its count here

or use a dict comprehension:

new_dict = {f(state): counts for state, counts in data['counts'].items()}
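If what you ultimately want are binary-string keys, here is a small sketch of converting the hexadecimal keys (assuming 5 wires, based on the '10100'-style keys you expected; the counts are just an illustrative slice of your output):

counts = {'0x9': 76, '0xa': 60, '0x14': 76}  # illustrative slice of your counts
n_wires = 5  # assumed from the 5-bit keys you expected

# int(k, 16) parses the hex string; format(..., '0{n}b') zero-pads to n_wires bits
binary_counts = {format(int(k, 16), f'0{n_wires}b'): v for k, v in counts.items()}
print(binary_counts)  # {'01001': 76, '01010': 60, '10100': 76}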

Hope that helps.

In Qiskit, the estimated outcome probabilities Pr(000) and Pr(111) are computed by taking the aggregate counts and dividing by the number of shots (the number of times the circuit was repeated). When we use qml.expval or qml.sample, do they perform this aggregation, or do we need to divide the results (from both qml.sample and qml.expval) by the number of shots ourselves? Everything is on qasm_simulator.
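For concreteness, this is the aggregation I mean, with made-up counts:

counts = {'000': 480, '111': 520}  # hypothetical counts over 1000 shots
shots = sum(counts.values())
probs = {state: c / shots for state, c in counts.items()}
print(probs)  # {'000': 0.48, '111': 0.52}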

ERROR: TypeError: Cannot convert value 1.0720763764999994 to a TensorFlow DType. "1.0720763764999994" is the value returned as the loss.

import tensorflow as tf

# theta is assumed to be defined above as the initial angle(s)
theta = tf.Variable(theta, dtype=tf.float64)
print("theta", theta)
opt = tf.keras.optimizers.SGD(learning_rate=0.01)

n_steps = 3

# Training process
steps = []
sdg_losses = []
for i in range(n_steps):
    with tf.GradientTape() as tape:
        loss = f_exp(theta)
        print(loss)

    steps.append(theta.numpy())  # snapshot the current value, not the Variable itself
    sdg_losses.append(loss)

    gradients = tape.gradient(loss, [theta])
    opt.apply_gradients(zip(gradients, [theta]))
    print(f"Step {i+1} - Loss = {loss}")  # 'i' is the loop variable; '_' was undefined

print(f"Final cost function: {f_exp(theta).numpy()}\nOptimized angles: {theta.numpy()}")

I am using the qasm simulator and SGD. The code works perfectly on the statevector simulator, but on qasm it throws the DType error above.

Hey @Amandeep!

When we use qml.expval or qml.sample, do they perform this aggregation, or do we need to divide the results (from both qml.sample and qml.expval) by the number of shots ourselves?

When measuring expectation values or calculating probabilities with a finite number of shots, PennyLane similarly infers those values from the sampled statistics. For example, see the code below:

import pennylane as qml

dev = qml.device('qiskit.aer', wires=1, backend='qasm_simulator')

@qml.qnode(dev)
def f():
    qml.RX(0.9, wires=0)
    return qml.expval(qml.PauliZ(0))

print("Exact expectation value:", f())
print("Sampled value:", f(shots=1000))

Regarding your error:

ERROR: TypeError: Cannot convert value 1.0720763764999994 to a TensorFlow DType. "1.0720763764999994" is the value returned as the loss.

We can best help with a minimal example of the code leading to this error.
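For reference, TensorFlow raises that particular TypeError when a plain Python float ends up somewhere a tf.DType is expected. A minimal reproduction of the error class (not necessarily your exact code path):

import tensorflow as tf

loss = 1.0720763764999994          # a plain Python float, like your returned loss
tf.cast(tf.constant(1.0), loss)    # TypeError: Cannot convert value 1.0720763764999994 to a TensorFlow DType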

@antalszava Could you please explain how we can use opt.iterations with the TF SGD optimizer? I have used it, but it displays 1. Please check the code in the message above.

Hi @amandeep,

It looks like you've created a separate post for this same question here? If so, thanks, that makes it easier to answer; we'll take a look and respond in that post.