Numpy says "probabilities do not sum to 1"

Hey snuffkin!

You uncovered a very interesting bug there!

What seems to happen is that the conversion from tf tensors to NumPy arrays produces an ndarray of dtype float32. I suspect that, at this lower precision, the automatic normalisation in the AmplitudeEmbedding template creates a state vector whose squared amplitudes don't pass the check in numpy's np.random.choice (which "default.qubit" uses to sample measurement results) that the probability distribution sums to one.
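
To see where the precision gets lost, here is a minimal NumPy sketch of the effect (not the actual PennyLane internals): normalising in float32 leaves the squared amplitudes summing to 1 only up to single precision, which can exceed the tolerance np.random.choice allows on a double-precision probability vector.

import numpy as np

# purely illustrative input, standing in for a flattened data point
amps = np.random.rand(2 ** 8).astype(np.float32)
amps = amps / np.linalg.norm(amps)        # normalisation happens in float32
probs = (amps ** 2).astype(np.float64)    # probabilities handed to the sampler

print(probs.sum())  # close to, but typically not exactly, 1.0
# often True for float32 inputs -> "probabilities do not sum to 1"
print(abs(probs.sum() - 1.0) > np.sqrt(np.finfo(np.float64).eps))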

As a fix, you’d need to convert your array to float64:

import pennylane as qml

n_qubits = 8
dev = qml.device("default.qubit", wires=n_qubits, shots=10, analytic=False)

# quantum circuit
@qml.qnode(dev)
def circuit(features=None):
    qml.templates.AmplitudeEmbedding(features, range(n_qubits), normalize=True)
    return qml.expval(qml.PauliZ(0))

# cast the features to float64 before passing them to the circuit
circuit(features=x_train[0].flatten().astype('float64'))

Interestingly, at least with the latest PL version I tested this on, assigning the training data point to a new variable seems to perform the conversion automatically:

n_qubits = 8
dev = qml.device("default.qubit", wires=n_qubits, shots=10, analytic=False)
x = x_train[0].flatten()
print(x.dtype) #  'float64'

# quantum circuit
@qml.qnode(dev)
def circuit(features=None):
    qml.templates.AmplitudeEmbedding(features, range(n_qubits), normalize=True)
    return qml.expval(qml.PauliZ(0))

circuit(features=x)

Hope this helps!
