# PennyLane statevector results differ from Qiskit statevector

```python
@qml.qnode(dev)  # dev is defined below in the driver snippet
def construct_circuit(theta):
    qubits = 5
    wl = [[0, 4, 1.32307045],
          [1, 3, 1.3663254999999999],
          [1, 4, 1.3517296749999999],
          [2, 3, 1.30759795],
          [1, 0, 1.325773325],
          [2, 4, 1.3012902],
          [2, 0, 1.273531925],
          [3, 4, 1.390962075],
          [2, 1, 1.2958350749999998],
          [3, 0, 1.324304475]]
    ws = [-0.6845224999999999,
          -2.3958211,
          -0.49777962500000017,
          -0.9059359499999997,
          -1.2217257499999996]

    n = len(theta)

    # apply Hadamards to get the n-qubit |+> state
    for i in range(qubits):
        qml.Hadamard(wires=i)

    j = 0
    while j < n:
        for l in range(len(wl)):
            qubit1 = wl[l][0]
            qubit2 = wl[l][1]
            w = wl[l][2]

            qml.CNOT(wires=[qubit1, qubit2])
            qml.RZ(-2.0 * theta[j] * w, wires=qubit2)
            qml.CNOT(wires=[qubit1, qubit2])

        for wire in range(qubits):
            qml.RZ(-2.0 * theta[j] * ws[wire], wires=wire)

        j += 1
        if j < n:
            for wire in range(qubits):
                qml.RX(2.0 * theta[j], wires=wire)
            j += 1

    return qml.expval(qml.PauliZ(0))
```

```python
dev = qml.device('qiskit.aer', wires=5, backend='statevector_simulator')
theta = [0.25, 0.25]

def f_statevector(theta):
    construct_circuit(theta)
    statevector = dev._state
    print("s[0]", statevector[0])
    print("s[1]", statevector[1])
    print("s[2]", statevector[2])
    print("s[3]", statevector[3])
```


In Qiskit:

```python
theta = [0.25, 0.25]

def construct_circuit(theta, measurement):
    global qubits, wl, ws

    if measurement == 1:
        qc = QuantumCircuit(qubits, qubits)
    else:
        qc = QuantumCircuit(qubits)

    n = len(theta)

    for i in range(qubits):
        qc.h(i)

    j = 0
    while j < n:
        for l in range(len(wl)):
            qc.cx(wl[l][0], wl[l][1])
            qc.rz(-2.0 * theta[j] * wl[l][2], wl[l][1])
            qc.cx(wl[l][0], wl[l][1])

        for i in range(qubits):
            qc.rz(-2.0 * theta[j] * ws[i], i)

        j += 1
        if j < n:
            for i in range(qubits):
                qc.rx(2.0 * theta[j], i)
            j += 1

    return qc
```

```python
backend_statevector = Aer.get_backend('statevector_simulator')
theta = [0.25, 0.25]

def f_statevector(theta):
    qc = construct_circuit(theta, 0)
    statevector = execute(qc, backend_statevector).result().get_statevector(qc)
    print("s[0]", statevector[0])
    print("s[1]", statevector[1])
    print("s[2]", statevector[2])
    print("s[3]", statevector[3])
```


PennyLane output:

```
s[0] (-0.12770500535333973+0.06192660660446593j)
s[1] (-0.012925467925004692-0.06881639484412187j)
s[2] (-0.028443830841366216-0.07524954579321873j)
s[3] (-0.004259676780593769-0.2712211586908369j)
```


Qiskit output:

```
s[0] (-0.12770500535333973+0.06192660660446593j)
s[1] (-0.0349974104403304-0.0685326298203696j)
s[2] (0.014842605839478005-0.04481278734342238j)
s[3] (0.10069490106478653-0.24025068613592362j)
```


The statevector values I get from `dev._state` are similar to Qiskit's, but they appear at different positions in the array. Is it possible to get the PennyLane statevector in the same order as Qiskit's?

Hi @Amandeep!

It’s hard to debug this fully without your full program (in particular, I note that the variables theta, wl, qubits, and ws are not defined above!).

Would you be able to post a version of your code that is minimal – that is, as much of the code has been removed as possible to showcase the error?

I had a go locally attempting to recreate this issue, but unfortunately was not able to:

```python
import pennylane as qml
from pennylane import numpy as np

dev = qml.device('qiskit.aer', wires=2, backend='statevector_simulator')

init_state = np.array([0.1, 0.2, 0.3, 0.4], requires_grad=False)
init_state /= np.linalg.norm(init_state)

@qml.qnode(dev)
def construct_circuit():
    qml.QubitStateVector(init_state, wires=[0, 1])
    return qml.expval(qml.PauliZ(0))

construct_circuit()
print("PennyLane statevector:", dev._state)

import qiskit

qc = qiskit.QuantumCircuit(2)
qc.initialize(init_state.numpy(), [0, 1])

backend_statevector = qiskit.Aer.get_backend('statevector_simulator')
statevector = backend_statevector.run(qc).result().get_statevector(qc)
print("Qiskit statevector:", statevector)
```


This gives output

```
PennyLane statevector: [0.18257419+0.j 0.36514837+0.j 0.54772256+0.j 0.73029674+0.j]
Qiskit statevector: [0.18257419+0.j 0.36514837+0.j 0.54772256+0.j 0.73029674+0.j]
```


I have edited the post with the values of wl and ws. The issue is that the values in the PL statevector are shuffled, although the values in the statevectors returned by Qiskit and PL are the same.

@josh Thank you for your prompt response. I updated the code. Please check if it works.

Thanks @Amandeep!

I had a look, and this is to be expected. It is because PennyLane and Qiskit use different conventions for the underlying statevector ordering.

For more details, see here:

In this particular case, if you want to get the Qiskit results to match the PennyLane results, you can simply reshape and transpose the statevector:


```python
def f_statevector(theta):
    qc = construct_circuit(theta, 0)
    statevector = backend_statevector.run(qc).result().get_statevector(qc)
    statevector = statevector.reshape([2] * qubits).T.flatten()
```
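To see concretely why the reshape-and-transpose works, here is a small self-contained sketch (using a made-up 3-qubit statevector, not the circuit above, so the mapping is easy to read off): reversing the qubit axes converts Qiskit's little-endian index ordering into PennyLane's big-endian ordering.

```python
import numpy as np

# Hypothetical 3-qubit statevector whose amplitudes encode their own index,
# written in Qiskit's little-endian convention (qubit 0 is the least
# significant bit of the basis-state index)
qiskit_state = np.arange(8, dtype=complex)

# Reversing the qubit axes reads the index bits in the opposite order,
# which is PennyLane's big-endian convention
pennylane_state = qiskit_state.reshape([2, 2, 2]).T.flatten()

# The amplitude Qiskit stores at index 0b001 (qubit 0 excited)
# appears in the PennyLane ordering at index 0b100
print(pennylane_state[4])  # -> (1+0j)
```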


@josh I am looking at it the opposite way: how can the PL results be made to match the Qiskit results?

No worries @Amandeep! It is in fact the same operation:

```python
def f_statevector(theta):
    construct_circuit(theta)
    statevector = dev.state.reshape([2] * qubits).T.flatten()
```


@josh Yes, I applied it and now it works. Thanks for solving my issue!

No worries @Amandeep!

@josh Is it possible to use the TensorFlow interface in PL with Qiskit-based simulators?
Also, could you please share any PL documentation where results are computed using the QASM simulator?

Hi @Amandeep, definitely

Simply change the device like so,

```python
dev = qml.device("qiskit.aer", wires=2, method="automatic")
```


and add TensorFlow when creating the QNode:

```python
@qml.qnode(dev, interface="tf")
```


and the QASM simulator should be used as a backend when using TensorFlow. For more details, check out the PennyLane-Qiskit docs (The Aer device — PennyLane-Qiskit 0.28.0-dev documentation).

Okay.
Suppose we do not define the number of steps when optimizing a function with PL optimizers. How can we calculate the number of times the function gets evaluated? I have seen there is an option to count the number of times the device gets executed (Optimization using SPSA — PennyLane).

@Amandeep Yes, that’s correct. You can use

```python
>>> dev.num_executions
5
```

to track how many times a device used by a QNode has been executed.

@josh Is it the number of times the device/simulator gets executed, or the number of times the optimizer evaluates the objective function?

Yes, it is the number of times the device/simulator gets executed. Unfortunately, the only way to extract the number of times it has been called from the optimizer is if the optimizer keeps track of that.

Which optimizer are you using?

Gradient descent and SGD using the TF interface. I need to count the optimizer's evaluations of the objective function.

Hi @Amandeep,

The number of iterations can be queried by checking the iterations attribute of the optimizer object (and calling its numpy method to convert to a scalar):

```python
import tensorflow as tf

opt = tf.keras.optimizers.SGD(learning_rate=0.1, momentum=0.9)
opt.iterations.numpy()
```
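One caveat worth noting: `opt.iterations` only advances when the optimizer actually applies an update (via `apply_gradients` or `minimize`); computing a loss by itself does not increment it. A minimal sketch with a toy quadratic loss standing in for the quantum circuit:

```python
import tensorflow as tf

theta = tf.Variable(0.5, dtype=tf.float64)
opt = tf.keras.optimizers.SGD(learning_rate=0.1)

# Each apply_gradients call increments opt.iterations by one;
# merely evaluating the loss does not.
for _ in range(5):
    with tf.GradientTape() as tape:
        loss = (theta - 1.0) ** 2  # toy loss, stand-in for circuit(theta)
    grads = tape.gradient(loss, [theta])
    opt.apply_gradients(zip(grads, [theta]))

print(opt.iterations.numpy())  # -> 5
```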


Hope this helps, let us know if we could help further!

@antalszava Thank you for your response.

I am applying SGD as below, and the step count comes out as 1. I don't want to specify the number of steps up front; I need to check how many steps the optimizer takes as I vary the step size.

```python
theta = tf.Variable(theta, dtype=tf.float64)
opt = tf.keras.optimizers.SGD(learning_rate=0.001)

loss = tf.abs(circuit(theta) - 0.5)**2

print("steps", opt.iterations.numpy())
```

Secondly, I am using the GD optimizer from PennyLane. I think `opt.iterations` will not work with GD, since I am getting `AttributeError: 'GradientDescentOptimizer' object has no attribute 'iterations'`:

```python
theta = opt.step(circuit, theta)

print("STEPS", opt.iterations.numpy())

value = circuit(theta)
```

@josh

As you have seen previously in the above code, I was using the statevector simulator and the Qiskit plugin. I got the results using `statevector = backend_statevector.run(qc).result().get_statevector(qc)`, and it worked perfectly. Now I am using the qasm_simulator. How can we execute the circuit on the qasm_simulator and get the result counts in PL? Is it similar to how we do it in Qiskit, or different? Any documentation would help; I have looked into PL but did not find anything related to it.

Hi @amandeep,

I think `opt.iterations` will not work with GD, since I am getting `AttributeError: 'GradientDescentOptimizer' object has no attribute 'iterations'`

Indeed, that only works with TensorFlow optimizers. In PennyLane, we don’t have this attribute defined.

When using PennyLane, to step the optimizer, an explicit call to the qml.GradientDescentOptimizer.step method has to be placed. Could a simple counter be defined that is incremented right after the line theta = opt.step(circuit, theta) and every other line where opt.step is called?

How can we execute the circuit on the qasm_simulator and get the result counts in PL? Is it similar to how we do it in Qiskit, or different?

In PennyLane, getting counts can be done using the qml.sample measurement type with a chosen observable in the return statement of a quantum function:

```python
import pennylane as qml

dev = qml.device('qiskit.aer', backend='qasm_simulator', wires=1, shots=10)

@qml.qnode(dev)
def circuit():
    return qml.sample(qml.PauliZ(0))

circuit()
```

```
array([1, 1, 1, 1, 1, 1, 1, 1, 1, 1])
```


So the observable is sampled shots=10 times after executing the quantum circuit.
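If you want Qiskit-style result counts rather than the raw samples, one option is to tally the returned eigenvalues yourself, for example with NumPy (a sketch using a hypothetical sample array in place of the real circuit output):

```python
import numpy as np

# Hypothetical array of eigenvalue samples, as would be returned by
# qml.sample(qml.PauliZ(0)) with shots=10
samples = np.array([1, 1, -1, 1, -1, 1, 1, 1, -1, 1])

# Tally the outcomes into a counts dictionary
values, counts = np.unique(samples, return_counts=True)
result_counts = dict(zip(values.tolist(), counts.tolist()))
print(result_counts)  # -> {-1: 3, 1: 7}
```

Recent PennyLane versions also provide a `qml.counts` measurement that returns such a dictionary directly.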

The following pages could further help:

After each execution, there is a private job object as well that is accessible using the underlying Qiskit device:

```python
dev._current_job.result().data()
```

```
{'counts': {'0x0': 10},
 'memory': ['0x0',
  '0x0',
  '0x0',
  '0x0',
  '0x0',
  '0x0',
  '0x0',
  '0x0',
  '0x0',
  '0x0']}
```


Overall, I would recommend using this latter option only as a last resort, because it could easily break (it is not considered a user-facing feature).

Hope this helps!