PennyLane statevector results different from Qiskit statevector?

@josh yeah, I applied it and now it works. Thanks for solving my issue :raised_hands:

No worries @Amandeep!

@josh Is it possible to use the TensorFlow interface in PL with Qiskit-based simulators?
Another thing: could you please share any PL documentation where the results are computed using the QASM simulator?

Hi @Amandeep, definitely :slight_smile:

Simply change the device like so,

dev = qml.device("qiskit.aer", wires=2, method="automatic")

and add TensorFlow when creating the QNode:

@qml.qnode(dev, interface="tf")

and the QASM simulator should be used as a backend when using TensorFlow. For more details, check out the PennyLane-Qiskit docs (The Aer device — PennyLane-Qiskit 0.28.0-dev documentation).
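
Putting these together, a minimal end-to-end sketch (the two-qubit circuit here is just an illustrative assumption):

import pennylane as qml
import tensorflow as tf

dev = qml.device("qiskit.aer", wires=2)

@qml.qnode(dev, interface="tf")
def circuit(theta):
    # Illustrative two-qubit circuit
    qml.RX(theta, wires=0)
    qml.CNOT(wires=[0, 1])
    return qml.expval(qml.PauliZ(1))

theta = tf.Variable(0.3, dtype=tf.float64)
with tf.GradientTape() as tape:
    loss = circuit(theta)
print(tape.gradient(loss, theta))  # gradient computed through the TF interface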

Okay.
Suppose we do not define the number of steps when optimizing the function using PL optimizers; how can we calculate the number of times the function gets evaluated? I have seen there is an option to count the number of times the device gets evaluated: Optimization using SPSA — PennyLane

@Amandeep yes that’s correct - you can do

>>> dev.num_executions
5

to track how many times a device used by a QNode is executed :slight_smile:
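
For example (a minimal sketch; the circuit and device are assumptions for illustration):

import pennylane as qml

dev = qml.device("default.qubit", wires=1)

@qml.qnode(dev)
def circuit(x):
    qml.RX(x, wires=0)
    return qml.expval(qml.PauliZ(0))

# Execute the QNode five times
for x in [0.1, 0.2, 0.3, 0.4, 0.5]:
    circuit(x)

print(dev.num_executions)  # 5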

@josh Is it the number of times the device/simulator gets executed, or the number of times the optimizer evaluates the objective function?

Yes, it is the number of times the device/simulator gets executed. Unfortunately, the only way to extract the number of times it has been called from the optimizer is if the optimizer itself keeps track of that.

Which optimizer are you using?

Gradient descent and SGD using the TF interface. I need to count the optimizer's evaluations of the objective function.

Hi @Amandeep,

The number of iterations can be queried by checking the iterations attribute of the optimizer object (and calling its numpy method to convert to a scalar):

import tensorflow as tf

opt = tf.keras.optimizers.SGD(learning_rate=0.1, momentum=0.9)
opt.iterations.numpy()  # number of update steps applied so far
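
Note that opt.iterations is incremented by each apply_gradients call, so it only reflects the number of update steps actually applied. A minimal sketch (the toy loss is an assumption for illustration):

import tensorflow as tf

opt = tf.keras.optimizers.SGD(learning_rate=0.1, momentum=0.9)
x = tf.Variable(1.0)

for _ in range(3):
    with tf.GradientTape() as tape:
        loss = x ** 2  # toy loss for illustration
    grads = tape.gradient(loss, [x])
    opt.apply_gradients(zip(grads, [x]))  # each call increments opt.iterations

print(opt.iterations.numpy())  # 3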

Hope this helps, let us know if we can help further!

@antalszava Thank you for your response.

I am applying SGD as below, and the step count comes out as 1. I don't want to specify the number of steps up front; I need to check how many steps the optimizer takes as I vary the step size.

theta = tf.Variable(theta, dtype=tf.float64)
opt = tf.keras.optimizers.SGD(learning_rate=0.001)

with tf.GradientTape() as tape:
    loss = tf.abs(circuit(theta) - 0.5)**2

gradients = tape.gradient(loss, [theta])
opt.apply_gradients(zip(gradients, [theta]))
print("steps", opt.iterations.numpy())

Secondly, I am using a GD optimizer from PennyLane. I think opt.iterations will not work with GD, since I am getting AttributeError: 'GradientDescentOptimizer' object has no attribute 'iterations':

opt = qml.GradientDescentOptimizer(stepsize=0.005)

theta = opt.step(circuit, theta)

print("STEPS", opt.iterations.numpy())

value=circuit(theta)

@josh

As you have seen previously in the above code, I was using the statevector simulator and the Qiskit plugin. I got the results using statevector = backend_statevector.run(qc).result().get_statevector(qc), and it worked perfectly. Now I am using qasm_simulator. How can we execute the circuit on qasm_simulator and get the result counts in PL? Is it similar to the way we do it in Qiskit, or different? Any documentation will help. I am looking into PL but did not find anything related to it.

Hi @amandeep,

I think opt.iterations will not work with GD, since I am getting AttributeError: 'GradientDescentOptimizer' object has no attribute 'iterations'

Indeed, that only works with TensorFlow optimizers. In PennyLane, we don’t have this attribute defined.

When using PennyLane, to step the optimizer, an explicit call to the qml.GradientDescentOptimizer.step method has to be placed. Could a simple counter be defined that is incremented right after the line theta = opt.step(circuit, theta) and every other line where opt.step is called?
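
For example, a minimal sketch of such a counter, reusing your circuit and theta (the iteration bound and convergence threshold are placeholder assumptions):

opt = qml.GradientDescentOptimizer(stepsize=0.005)

num_steps = 0
prev_value = circuit(theta)
for _ in range(1000):  # placeholder upper bound on iterations
    theta = opt.step(circuit, theta)
    num_steps += 1  # increment right after each opt.step call
    value = circuit(theta)
    if abs(value - prev_value) < 1e-6:  # placeholder convergence criterion
        break
    prev_value = value

print("STEPS", num_steps)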

How can we execute the circuit on qasm_simulator and get the result counts in PL? Is it similar to the way we do it in Qiskit, or different?

In PennyLane, getting counts can be done using the qml.sample measurement type with a chosen observable in the return statement of a quantum function:

import pennylane as qml

dev = qml.device('qiskit.aer', backend='qasm_simulator', wires=1, shots=10)

@qml.qnode(dev)
def circuit():
    return qml.sample(qml.PauliZ(0))

circuit()
array([1, 1, 1, 1, 1, 1, 1, 1, 1, 1])

So the observable is sampled shots=10 times after executing the quantum circuit.
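
If you need aggregated counts rather than the raw samples, one option is to tally them yourself, e.g. with collections.Counter (a minimal sketch using the circuit above):

from collections import Counter

samples = circuit()
print(Counter(samples.tolist()))  # e.g. Counter({1: 10}) for the output above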


After each execution, there is a private job object as well that is accessible using the underlying Qiskit device:

dev._current_job.result().data()
{'counts': {'0x0': 10},
 'memory': ['0x0',
  '0x0',
  '0x0',
  '0x0',
  '0x0',
  '0x0',
  '0x0',
  '0x0',
  '0x0',
  '0x0']}

Overall, I would recommend using this latter option only as a last resort because this could easily break (as it’s not considered to be a user-facing feature).

Hope this helps! :slightly_smiling_face:

@antalszava Thank you for your response. I am getting the counts like
{'counts': {'0x9': 76, '0xa': 60, '0x14': 76, '0xc': 93, '0x3': 74, '0x6': 87, '0xf': 10
I understand it returns a dict with hexadecimal keys (maybe I am wrong). I was expecting something like {'10100': 1, '00101': 1, '01111': 1, '00110': 1, '01101': 1, '01010': 1…
I need to access each binary key's value from the counts for an operation, but it throws the following error:

unsupported operand type(s) for *: 'float' and 'dict'

Hi @Amandeep :slightly_smiling_face:

The docstring for the relevant Qiskit object is here.

I don’t know exactly what operation you are trying to do on the keys, but from the error you’re getting, it sounds like you are applying the operation to the entire dictionary instead of an individual element. You can loop over the individual entries:

for state, counts in data['counts'].items():
    # act on each (state, counts) pair here

or use a comprehension

new_dict = {f(state): counts for state, counts in data['counts'].items()}
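
For instance, to convert the hexadecimal keys from your output into zero-padded bitstrings (a sketch assuming 5 wires, based on the key length you expected):

counts_hex = {'0x9': 76, '0xa': 60, '0x14': 76}  # subset of your output
num_wires = 5  # assumption based on your expected keys like '10100'
counts_bin = {format(int(k, 16), f'0{num_wires}b'): v
              for k, v in counts_hex.items()}
print(counts_bin)  # {'01001': 76, '01010': 60, '10100': 76}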

Hope that helps.

In Qiskit, the estimated outcome probabilities Pr(000) and Pr(111) are computed by taking the aggregate counts and dividing by the number of shots (the number of times the circuit was repeated). When we use qml.expval or qml.sample, do they perform this aggregation, or do we need to divide the results from both (qml.sample and qml.expval) by the number of shots ourselves? Everything is on qasm_simulator.

ERROR: TypeError: Cannot convert value 1.0720763764999994 to a TensorFlow DType. "1.0720763764999994" is returned as the loss.

theta = tf.Variable(theta, dtype=tf.float64)
print("theta", theta)
opt = tf.keras.optimizers.SGD(learning_rate=0.01)

step = 3

# Training process
steps = []
sdg_losses = []
for i in range(step):
    with tf.GradientTape() as tape:
        loss = f_exp(theta)
        print(loss)

    steps.append(theta)
    sdg_losses.append(loss)

    gradients = tape.gradient(loss, [theta])
    opt.apply_gradients(zip(gradients, [theta]))
    print(f"Step {i+1} - Loss = {loss}")

print(f"Final cost function: {f_exp(theta).numpy()}\nOptimized angles: {theta.numpy()}")

I am using the qasm simulator and SGD. The code works perfectly on statevector, but it throws the DType error on the qasm simulator.

Hey @Amandeep!

When we use qml.expval or qml.sample, do they perform this aggregation, or do we need to divide the results from both (qml.sample and qml.expval) by the number of shots ourselves?

When measuring expectation values or calculating probabilities with a finite number of shots, PennyLane similarly infers those values from the sampled statistics. For example, see the code below:

import pennylane as qml

dev = qml.device('qiskit.aer', wires=1, backend='qasm_simulator')

@qml.qnode(dev)
def f():
    qml.RX(0.9, wires=0)
    return qml.expval(qml.PauliZ(0))

print("Exact expectation value:", f())
print("Sampled value:", f(shots=1000))

Regarding your error:

ERROR: TypeError: Cannot convert value 1.0720763764999994 to a TensorFlow DType. "1.0720763764999994" is returned as the loss.

We can best help with a minimal example of the code leading to this error.

@antalszava Could you please explain how we can use opt.iterations with TF SGD optimizers? I have used it, but it displays 1. Please check the code in my message above.

Hi @amandeep,

It looks like you’ve created a separate post for this same specific question here? If so, thanks, that will make it easier for us to answer; we’ll take a look and respond in that post.