Batching gives the same output

Hello, I’m trying to do batched generation of probability distributions, but I get exactly the same output every time. I’m not sure whether it’s a problem with my implementation or whether I’m misunderstanding something conceptually. I also added a simple measurement-error noise model in the hope that it would add some stochasticity to the model’s behavior, but every run still returns the same values.

import pennylane as qml
import numpy as np
import torch
from torch.autograd import Variable

num_qubits = 3

def rotation_layer(w):
    for i in range(num_qubits):
        qml.RY(w[i], wires=i)

def entangling_block(w):
    for i in range(num_qubits):
        qml.CZ(wires = [i, (i+1)%num_qubits])

dev = qml.device("default.mixed", wires = 3)

@qml.qnode(dev, interface='torch')
def generator(w, num_qubits, num_layers = 3):
    
    rotation_layer(w[:num_qubits])
    for i in range(1, num_layers*2 + 1, 2):
        entangling_block(w[num_qubits * (i) : num_qubits * (i+1)])
        rotation_layer(w[num_qubits * (i+1) : num_qubits * (i+2)])
        
    for i in range(num_qubits): # measurement error
        qml.BitFlip(0.3, wires = i)

    return qml.probs(wires=range(num_qubits))

num_layers = 3
batch_size = 5
params = np.random.normal(0, np.pi, size=(num_layers * 2 + 1) * num_qubits)  # randomly initialized params
batch = np.repeat(params, batch_size).reshape(-1, batch_size)  # the same parameters repeated for every circuit in the batch

last = generator(batch, num_qubits)
for i in range(100):
    cur = generator(batch, num_qubits)
    if np.sum(np.abs(last - cur)) != 0:
        print("flip")  # prints if this batch differs from the previous one
    if not all((x == cur[0]).all() for x in cur):
        print("flip")  # prints if the outputs within a batch differ
    last = cur

Here is the output of qml.about().

Name: PennyLane
Version: 0.28.0
Summary: PennyLane is a Python quantum machine learning library by Xanadu Inc.
Home-page: https://github.com/XanaduAI/pennylane
Author: 
Author-email: 
License: Apache License 2.0
Location: /opt/conda/envs/pennylane/lib/python3.9/site-packages
Requires: appdirs, autograd, autoray, cachetools, networkx, numpy, pennylane-lightning, requests, retworkx, scipy, semantic-version, toml
Required-by: PennyLane-Cirq, PennyLane-Lightning, PennyLane-qiskit, pennylane-qulacs, PennyLane-SF

Platform info:           Linux-5.4.209-116.367.amzn2.x86_64-x86_64-with-glibc2.31
Python version:          3.9.15
Numpy version:           1.23.5
Scipy version:           1.10.0
Installed devices:
- default.gaussian (PennyLane-0.28.0)
- default.mixed (PennyLane-0.28.0)
- default.qubit (PennyLane-0.28.0)
- default.qubit.autograd (PennyLane-0.28.0)
- default.qubit.jax (PennyLane-0.28.0)
- default.qubit.tf (PennyLane-0.28.0)
- default.qubit.torch (PennyLane-0.28.0)
- default.qutrit (PennyLane-0.28.0)
- null.qubit (PennyLane-0.28.0)
- cirq.mixedsimulator (PennyLane-Cirq-0.28.0)
- cirq.pasqal (PennyLane-Cirq-0.28.0)
- cirq.qsim (PennyLane-Cirq-0.28.0)
- cirq.qsimh (PennyLane-Cirq-0.28.0)
- cirq.simulator (PennyLane-Cirq-0.28.0)
- lightning.qubit (PennyLane-Lightning-0.28.2)
- strawberryfields.fock (PennyLane-SF-0.20.1)
- strawberryfields.gaussian (PennyLane-SF-0.20.1)
- strawberryfields.gbs (PennyLane-SF-0.20.1)
- strawberryfields.remote (PennyLane-SF-0.20.1)
- strawberryfields.tf (PennyLane-SF-0.20.1)
- qiskit.aer (PennyLane-qiskit-0.28.0)
- qiskit.basicaer (PennyLane-qiskit-0.28.0)
- qiskit.ibmq (PennyLane-qiskit-0.28.0)
- qiskit.ibmq.circuit_runner (PennyLane-qiskit-0.28.0)
- qiskit.ibmq.sampler (PennyLane-qiskit-0.28.0)
- qulacs.simulator (pennylane-qulacs-0.28.0)

Hi @jkwan314 and thanks for your post.

If you are looking for stochastic behavior, I would recommend setting finite shots on the device,

dev = qml.device("default.mixed", wires = 3, shots=100)

and disabling caching:

@qml.qnode(dev, interface='torch', cache=False)

Since the execution was occurring with shots=None, the simulation was fully deterministic: the same parameters always produce exactly the same probabilities. To take advantage of the finite shots and get a different result for each circuit in the batch, caching also needs to be disabled.
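Putting the two suggestions together, here is a minimal sketch of the fix (using a simplified one-layer circuit rather than the full generator above; shots=100 and a batch size of 5 are just example values). Because the probabilities are now estimated from finite samples and each circuit in the batch is executed separately, repeated runs and the individual batch entries will typically differ:

import pennylane as qml
import numpy as np

num_qubits = 3

# Finite shots: probabilities are estimated from samples, so repeated
# executions of the same circuit can give different results.
dev = qml.device("default.mixed", wires=num_qubits, shots=100)

# cache=False: otherwise the identical circuits in the batch would reuse
# a single cached result.
@qml.qnode(dev, interface="torch", cache=False)
def circuit(w):
    for i in range(num_qubits):
        qml.RY(w[i], wires=i)
    for i in range(num_qubits):  # measurement error, as in the original circuit
        qml.BitFlip(0.3, wires=i)
    return qml.probs(wires=range(num_qubits))

params = np.random.normal(0, np.pi, size=num_qubits)
batch = np.repeat(params, 5).reshape(-1, 5)  # the same parameters for each of the 5 circuits

out1 = circuit(batch)
out2 = circuit(batch)
print(np.allclose(np.asarray(out1), np.asarray(out2)))  # typically False: runs now differ
print(all(np.allclose(np.asarray(out1[0]), np.asarray(r)) for r in out1))  # typically False: entries within a batch differ

Note that with shots=None the BitFlip channel only mixes the density matrix deterministically, which is why it did not introduce any randomness on its own; the sampling from finite shots is what makes the outputs stochastic.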
