Noise Simulation on Different Backends

Hello! I was looking to run some tests executing the same kernel on different backends. The goal is to compare the ideal performance of a quantum feature extractor against noisy simulations of real QPUs. I don’t really want to run code on hosted simulators (like the ones available on AWS, the Rigetti cloud, etc.); I was mostly looking into injecting noise into my kernel using “ready-made” noise profiles of real QPUs, so I can conduct a systematic analysis without having to pay. Ideally, I would like to treat the backends as hyperparameters, reusing the same code with different simulated noisy backends and selecting them via config files.
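Concretely, the backend-as-hyperparameter idea I have in mind looks something like this minimal sketch (the registry entries and the helper are placeholders I made up, not an established API):

```python
import json

# Each entry fully describes a simulated backend: the PennyLane device name
# plus its keyword arguments. Names and kwargs here are placeholders.
BACKEND_REGISTRY = {
    "ideal": {"name": "default.qubit", "kwargs": {}},
    "ionq_sim": {"name": "ionq.simulator", "kwargs": {"shots": 1000}},
}

def device_spec_from_config(config: dict) -> tuple[str, dict]:
    """Resolve a backend label from a config dict (e.g. loaded via json.load)."""
    entry = BACKEND_REGISTRY[config["backend"]]
    return entry["name"], {**entry["kwargs"], "wires": config["n_qubits"]}

# Usage: the resolved spec would then be passed to qml.device(name, **kwargs)
name, kwargs = device_spec_from_config(json.loads('{"backend": "ideal", "n_qubits": 6}'))
```

Swapping backends then only means changing one string in the config file, never the kernel code.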
The dummy kernel I have coded is meant to encode channel information from an EEG using amplitude encoding, and to iteratively re-upload timestep information after applying a strongly entangling template with random weights. The EEG is cut into small sliding windows, which should be executed in parallel batches, just like we would do in PyTorch. Here’s the code:

import pennylane as qml
import torch
from math import log2, pi
import logging
import os

# Logging configuration
logging.basicConfig(level=logging.INFO)
log = logging.getLogger(__name__)

# -----------------------------------------------------------------------------
# 1. Pre-processing and Post-processing Functions (Data Logic Side)
# -----------------------------------------------------------------------------

def preprocess_data(raw_eeg_batch: torch.Tensor, sliding_window_length: int) -> tuple[torch.Tensor, int, int]:
    """
    Prepares EEG data: trims, windows, and flattens the batch for the quantum circuit.
    """
    batch_size = raw_eeg_batch.shape[0]
    total_timesteps = raw_eeg_batch.shape[1]
    n_channels = raw_eeg_batch.shape[2]

    log.info(f"Preprocessing: Input shape {raw_eeg_batch.shape}")

    # Calculate how many full windows fit
    n_windows = total_timesteps // sliding_window_length
    
    # Calculate remainder and trim excess data
    timesteps_to_trim = total_timesteps % sliding_window_length
    valid_length = total_timesteps - timesteps_to_trim
    
    if timesteps_to_trim > 0:
        trimmed_tensor = raw_eeg_batch[:, :valid_length, :]
    else:
        trimmed_tensor = raw_eeg_batch

    # Reshape into windows: [Batch, N_Windows, Window_Len, Channels]
    windowed_tensor = trimmed_tensor.view(
        batch_size, n_windows, sliding_window_length, n_channels
    )

    # Flattening: [Batch * N_Windows, Window_Len, Channels]
    # This is the tensor that will enter the quantum circuit
    circuit_input = windowed_tensor.reshape(-1, sliding_window_length, n_channels)
    
    log.info(f"Preprocessing: Circuit input shape {circuit_input.shape}")
    
    return circuit_input, batch_size, n_windows

def postprocess_data(qnode_output: torch.Tensor, batch_size: int, n_windows: int, n_qubits: int) -> torch.Tensor:
    """
    Reconstructs the original shape from circuit results.
    PennyLane with Torch interface usually returns [n_qubits, batch_total] or [batch_total, n_qubits]
    depending on version/config. Here we handle the standard stack.
    """
    # If output is a list of tensors (one per qubit), stack them
    if isinstance(qnode_output, list):
        stacked_output = torch.stack(qnode_output) # [n_qubits, total_batch_size]
        transposed_output = stacked_output.T       # [total_batch_size, n_qubits]
    else:
        # If it is already a tensor
        transposed_output = qnode_output

    log.info(f"Postprocessing: Raw output shape {transposed_output.shape}")

    # Final reshape: [Batch, N_Windows, Features (Qubits)]
    reassembled_tensor = transposed_output.reshape(batch_size, n_windows, n_qubits)
    
    log.info(f"Postprocessing: Final shape {reassembled_tensor.shape}")
    return reassembled_tensor

# -----------------------------------------------------------------------------
# 2. Quantum Kernel (Pure Circuit Logic)
# -----------------------------------------------------------------------------

class AmplitudeReUploadKernel:
    """
    Contains the circuit definition, weights, and device setup.
    """
    def __init__(self, n_channels, sliding_window_length):
        self.n_channels = n_channels
        # Number of qubits needed for amplitude encoding (assumes n_channels is a power of 2)
        self.n_qubits = round(log2(n_channels))
        self.sliding_window_length = sliding_window_length
        
        # Initialize random weights for parametrized layers
        # Shape: [Window_Len, Layers=1, Qubits, Axis=3]
        self.weights = torch.rand(self.sliding_window_length, 1, self.n_qubits, 3) * 2 * pi

        # Device setup
        self._setup_device()
        
        # QNode setup
        self.qnode = self._create_qnode()

    def _setup_device(self):
        # Example using IonQ simulator
        self.dev = qml.device("ionq.simulator", wires=self.n_qubits, shots=1000)

    def _create_qnode(self):
        @qml.qnode(self.dev, interface="torch")
        def circuit(inputs, weights):
            """
            inputs shape: [batch_size, sliding_window_length, n_channels]
            The input is 'implicitly' batched by PennyLane/Torch.
            We iterate over the time dimension (sliding_window_length).
            """
            # Data Re-uploading Loop
            for t in range(self.sliding_window_length):
                # 1. Encoding (Amplitude Embedding)
                # inputs[:, t, :] takes the t-th time slice for the whole batch
                qml.AmplitudeEmbedding(features=inputs[:, t, :], wires=range(self.n_qubits), normalize=True, pad_with=0.)
                
                # 2. Processing (Variational Layers)
                qml.StronglyEntanglingLayers(weights[t], wires=range(self.n_qubits))
                
                # Visual barrier (optional, useful for visual debugging)
                # qml.Barrier(wires=range(self.n_qubits))

            return [qml.expval(qml.PauliZ(i)) for i in range(self.n_qubits)]
        
        # Using broadcast_expand for batch execution if needed
        return qml.transforms.broadcast_expand(circuit)

    def forward(self, x):
        """Executes the circuit"""
        return self.qnode(x, self.weights)


# -----------------------------------------------------------------------------
# 3. Main Execution Workflow
# -----------------------------------------------------------------------------

if __name__ == "__main__":
    
    # --- Configuration ---
    CONFIG = {
        'n_channels': 64,
        'sliding_window_length': 4,  
        'batch_size': 16,            # Realistic batch size
        'signal_len': 128,           # Time signal length
    }

    # 1. Dummy Data Generation (Batch, Time, Channels)
    raw_eeg = torch.rand(CONFIG['batch_size'], CONFIG['signal_len'], CONFIG['n_channels'])
    
    # 2. Preprocessing (In Main)
    processed_input, original_bs, n_wins = preprocess_data(
        raw_eeg, 
        CONFIG['sliding_window_length']
    )

    # 3. Kernel Initialization
    kernel = AmplitudeReUploadKernel(
        n_channels=CONFIG['n_channels'],
        sliding_window_length=CONFIG['sliding_window_length'],
    )

    # 4. Circuit Execution (Benchmark here)
    log.info("Starting quantum circuit execution...")
    quantum_output = kernel.forward(processed_input)

    # 5. Postprocessing (In Main)
    final_output = postprocess_data(
        quantum_output, 
        batch_size=original_bs, 
        n_windows=n_wins, 
        n_qubits=kernel.n_qubits
    )

    print(f"\nFinal Result:")
    print(f"Input Shape: {raw_eeg.shape}")
    print(f"Output Shape: {final_output.shape} (Batch, Windows, Qubits)")
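As a sanity check on the windowing arithmetic, the shapes in the script above can be predicted with plain integer math (pure Python, no torch needed):

```python
# Mirror of the trimming/windowing logic in preprocess_data,
# using the same CONFIG values as the script above.
signal_len, window_len, batch_size, n_channels = 128, 4, 16, 64

n_windows = signal_len // window_len      # full windows per trial
trimmed = signal_len % window_len         # timesteps dropped at the end
circuit_batch = batch_size * n_windows    # windows entering the circuit
n_qubits = (n_channels - 1).bit_length()  # qubits for 64-channel amplitude encoding

# The QNode sees input of shape [circuit_batch, window_len, n_channels]
# and returns n_qubits expectation values per window.
print(circuit_batch, n_qubits)  # 512 6
```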

In this case, I was testing the IonQ plugin to see whether it worked. It seems to be working, but if I understand correctly there is no way to make this simulation noisy, right?

I’ve tried Rigetti too, but I have seen that its plugin has been deprecated since version 0.40.0. I have attached the error anyway. The code is the same; the only difference is of course the device, which I set to self.dev = qml.device("rigetti.qvm", device="6q", noisy=True)

Traceback (most recent call last):
  File "/home/lollo/Desktop/eeg-attencion/backends/rigetti.py", line 151, in <module>
    kernel = AmplitudeReUploadKernel(
             ^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/lollo/Desktop/eeg-attencion/backends/rigetti.py", line 90, in __init__
    self._setup_device()
  File "/home/lollo/Desktop/eeg-attencion/backends/rigetti.py", line 96, in _setup_device
    self.dev = qml.device("rigetti.qvm", device="6q", noisy=True)
               ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/lollo/Desktop/eeg-attencion/backends/.venv/lib/python3.11/site-packages/pennylane/devices/device_constructor.py", line 244, in device
    plugin_device_class = plugin_devices[name].load()
                          ^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/lollo/.local/share/uv/python/cpython-3.11.13-linux-x86_64-gnu/lib/python3.11/importlib/metadata/__init__.py", line 202, in load
    module = import_module(match.group('module'))
             ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/lollo/.local/share/uv/python/cpython-3.11.13-linux-x86_64-gnu/lib/python3.11/importlib/__init__.py", line 126, in import_module
    return _bootstrap._gcd_import(name[level:], package, level)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "<frozen importlib._bootstrap>", line 1204, in _gcd_import
  File "<frozen importlib._bootstrap>", line 1176, in _find_and_load
  File "<frozen importlib._bootstrap>", line 1147, in _find_and_load_unlocked
  File "<frozen importlib._bootstrap>", line 690, in _load_unlocked
  File "<frozen importlib._bootstrap_external>", line 940, in exec_module
  File "<frozen importlib._bootstrap>", line 241, in _call_with_frames_removed
  File "/home/lollo/Desktop/eeg-attencion/backends/.venv/lib/python3.11/site-packages/pennylane_rigetti/__init__.py", line 7, in <module>
    from .qpu import QPUDevice
  File "/home/lollo/Desktop/eeg-attencion/backends/.venv/lib/python3.11/site-packages/pennylane_rigetti/qpu.py", line 26, in <module>
    from pennylane.measurements import Expectation
ImportError: cannot import name 'Expectation' from 'pennylane.measurements' (/home/lollo/Desktop/eeg-attencion/backends/.venv/lib/python3.11/site-packages/pennylane/measurements/__init__.py)

I’m on PennyLane 0.43.1:

Name: pennylane
Version: 0.43.1
Location: /home/lollo/Desktop/eeg-attencion/backends/.venv/lib/python3.11/site-packages
Requires: appdirs, autograd, autoray, cachetools, diastatic-malt, networkx, numpy, packaging, pennylane-lightning, requests, rustworkx, scipy, tomlkit, typing-extensions
Required-by: pennylane-ionq, pennylane-lightning, pennylane-rigetti

I’m planning to study the pennylane-qiskit and pytket-pennylane docs (for Quantinuum QPU simulations) too, but right now I’m a bit swamped. Am I on the right track, or is there something I’m missing?
Thank you for your help!

EDIT: I have removed Strawberry Fields, as it is stated to be incompatible with my PennyLane version.

Hi @LM98 ,

I can share a few pointers but unfortunately it’s not great news.

I haven’t seen many quantum hardware providers sharing “ready-made” noise profiles of real QPUs. They may exist out there, but you may need to do some digging, and it’s hard to know how realistic they will actually be.

The ionq.simulator is indeed a noiseless simulator (more details on the devices page).

You could try using the pennylane-qrack plugin. The qrack.simulator device can be used for noisy simulations, so maybe it can help you do what you need. Here’s a demo that uses that device for reference.

You could also try using the pennylane-qiskit plugin, which allows you to use devices such as qiskit.remote and qiskit.aer. I’m adding a code example below which shows you how to use both devices, and how to access both real backends and fake backends. According to the Qiskit docs “fake backends are built to mimic the behaviors of IBM Quantum systems using system snapshots”. Note however that these snapshots may be very old so I would recommend checking their docs to get more info on these fake backends.

Below I show how to use FakeManilaV2. Just make sure to change the number of wires to 5 since this is the number of qubits for that backend.

import pennylane as qml
from qiskit_ibm_runtime.fake_provider import FakeManilaV2
from qiskit_ibm_runtime import QiskitRuntimeService

token = "<add your IBM API token here>" # Your token is confidential. Do not share your key in public code.
crn = "<add your IBM Cloud CRN here>" # IBM Cloud CRN or instance name.

# save_account() stores the credentials on disk and returns None,
# so there is nothing to assign here.
QiskitRuntimeService.save_account(
  token=token,
  instance=crn,
  set_as_default=True, # Optionally set these as your default credentials.
  overwrite=True # Set to True if you run this cell more than once
)

# Load the default credentials.
service = QiskitRuntimeService()

SHOTS = 2

# Choose a backend
# backend = FakeManilaV2()
backend = service.backend(name= 'ibm_fez')

# Use qiskit.remote
# dev = qml.device("qiskit.remote", wires=156, backend=backend) # Set the number of qubits according to the device

# Alternatively use the local qiskit.aer simulator
dev = qml.device("qiskit.aer", wires=3)

# Write your PennyLane circuit
@qml.set_shots(SHOTS)
@qml.qnode(dev)
def test_circuit():
    qml.Hadamard(0)
    qml.CNOT(wires=[0,1])
    return qml.probs(wires=[0, 1])

# And run it on the device of your choice!
print(test_circuit())
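If you’d rather keep everything local but still use the fake-backend calibration data, another pattern is to extract a noise model from the snapshot and hand it to qiskit.aer. The sketch below assumes qiskit-aer and qiskit-ibm-runtime are installed; the helper name is just for illustration:

```python
def make_noisy_aer_device(n_wires: int = 5):
    # Imports inside the function so the sketch only needs Qiskit when called.
    import pennylane as qml
    from qiskit_aer.noise import NoiseModel
    from qiskit_ibm_runtime.fake_provider import FakeManilaV2

    # Build an Aer noise model from the FakeManilaV2 calibration snapshot
    # and attach it to the local Aer simulator (Manila has 5 qubits).
    noise_model = NoiseModel.from_backend(FakeManilaV2())
    return qml.device("qiskit.aer", wires=n_wires, noise_model=noise_model)
```

This way the noisy runs cost nothing, and the device returned by the helper drops into your existing QNode setup unchanged.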

Regarding your other question, the Rigetti device has indeed been deprecated, so I don’t think you can use it anymore. If you really need it, you can try to fork the plugin and patch the code, but unfortunately we won’t be able to support you with this endeavour, and it’s probably going to be a lot of work. Unless you’re an experienced software developer it’s likely not worth it.

I hope these options help with your project. There’s no obvious answer and there’s a lot of complexity here, so if you struggle with the next steps I would recommend breaking the problem into small, easy pieces and going at them one by one.

I hope this helps!

Hello Catalina, thanks for your answer!

I think this setup might be good enough for now! I was also looking into the PennyLane-Braket plugin, as there is the possibility of importing the latest calibration data for some QPUs and using it to build ad-hoc noise models for each of the QPUs I might need to include in my analysis. However, the degree of interoperability between the Braket noise models and the PennyLane ones was not very clear to me. From this GitHub issue, it seems that a noise model built with Braket can be used on a PennyLane device, but from my tests it seems that not all noise types are supported.

This is the code I ran:

from braket.aws import AwsDevice, AwsSession
from braket.circuits import Gate
from braket.circuits.noise_model import GateCriteria, NoiseModel, ObservableCriteria
from braket.circuits.noises import (
    BitFlip,
    Depolarizing,
    TwoQubitDepolarizing,
)
import pennylane as qml
from boto3 import Session

boto3_session = Session(profile_name="quantum", region_name="us-west-1")
braket_session = AwsSession(boto_session=boto3_session)

rigetti = AwsDevice("arn:aws:braket:us-west-1::device/qpu/rigetti/Ankaa-3", aws_session=braket_session)

assert rigetti.properties.standardized.oneQubitProperties, "missing one qubit properties for Rigetti"
assert rigetti.properties.standardized.twoQubitProperties, "missing two qubit properties for Rigetti" 

noise_model = NoiseModel()
for qubit, data in rigetti.properties.standardized.oneQubitProperties.items():
    try:
        readout_error = 1 - data.oneQubitFidelity[2].fidelity  # readout fidelity entry
        noise_model.add_noise(BitFlip(readout_error), ObservableCriteria(qubits=int(qubit)))

        depolarizing_rate = 1 - data.oneQubitFidelity[1].fidelity
        noise_model.add_noise(Depolarizing(probability=depolarizing_rate), GateCriteria(qubits=int(qubit)))
    except Exception:
        # Skip qubits with missing or malformed calibration entries
        pass

for qubit_pair, data in rigetti.properties.standardized.twoQubitProperties.items():
    q0, q1 = (int(s) for s in qubit_pair.split("-"))
    try:
        if data.twoQubitGateFidelity[0].gateName == "ISWAP":
            two_qubit_rate = 1 - data.twoQubitGateFidelity[0].fidelity
            noise_model.add_noise(
                TwoQubitDepolarizing(two_qubit_rate),
                GateCriteria(Gate.ISwap, [(q0, q1), (q1, q0)]),
            )
    except Exception:
        # Skip pairs with missing or malformed calibration entries
        pass

dev = qml.device("braket.local.qubit", parallel=False, braket="braket_dm", wires=2, noise_model=noise_model)

@qml.qnode(dev)
def circuit():
    # Minimal Bell-state circuit to exercise the noise model
    qml.Hadamard(wires=0)
    qml.CNOT(wires=[0, 1])
    return qml.expval(qml.PauliZ(0))

circuit()

I used this tutorial as a reference. The error I get is:

Traceback (most recent call last):
  File "/home/lollo/Desktop/eeg-attencion/noise_factory/main.py", line 47, in <module>
    dev = qml.device("braket.local.qubit", parallel=False, braket="braket_dm", wires=2, noise_model=noise_model)
          ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/lollo/Desktop/eeg-attencion/noise_factory/.venv/lib/python3.11/site-packages/pennylane/devices/device_constructor.py", line 264, in device
    dev = plugin_device_class(*args, **options)
          ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/lollo/Desktop/eeg-attencion/noise_factory/.venv/lib/python3.11/site-packages/braket/pennylane_plugin/braket_device.py", line 1115, in __init__
    super().__init__(wires, device, shots=shots, **run_kwargs)
  File "/home/lollo/Desktop/eeg-attencion/noise_factory/.venv/lib/python3.11/site-packages/braket/pennylane_plugin/braket_device.py", line 177, in __init__
    self._validate_noise_model_support()
  File "/home/lollo/Desktop/eeg-attencion/noise_factory/.venv/lib/python3.11/site-packages/braket/pennylane_plugin/braket_device.py", line 592, in _validate_noise_model_support
    raise ValueError(
ValueError: StateVectorSimulator does not support noise or the noise model includes noise that is not supported by StateVectorSimulator.

I suspected the problem was the TwoQubitDepolarizing noise type, since the list of PennyLane noise channels in the docs includes DepolarizingChannel and BitFlip, but commenting out the two-qubit properties loop resulted in the same error. Here’s the PennyLane info again:

Name: pennylane
Version: 0.43.1
Location: /home/lollo/Desktop/eeg-attencion/noise_factory/.venv/lib/python3.11/site-packages
Requires: appdirs, autograd, autoray, cachetools, diastatic-malt, networkx, numpy, packaging, pennylane-lightning, requests, rustworkx, scipy, tomlkit, typing-extensions
Required-by: amazon-braket-pennylane-plugin, pennylane-lightning

Do you know more about it? Am I doing something wrong?

Hi @LM98 ,

From the error message it looks like there is some noise that is not supported by the device. It’s hard to tell which noise it is, so I would first recommend removing all noise and slowly adding it back one type at a time to see which one is causing the problem.
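That add-one-at-a-time loop can be automated with a generic helper that stops at the first entry a validator rejects. This sketch is library-agnostic: the build_model and validate callables stand in for your NoiseModel construction and the qml.device call:

```python
def first_failing_entry(entries, build_model, validate):
    """Add entries incrementally; return the first entry whose addition
    makes validate() raise, or None if every entry passes."""
    accepted = []
    for entry in entries:
        candidate = accepted + [entry]
        try:
            validate(build_model(candidate))
        except Exception:
            return entry  # this entry broke validation
        accepted = candidate
    return None
```

Wiring validate to a function that constructs the braket.local.qubit device would then isolate the offending noise type in one run.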

You could also try running the code in the GitHub issue that you found since it may be a good reference test.

Just make sure you know the costs of running the code that you decide to use for testing.

I’m sorry I don’t have better info, it’s hard to debug issues when external devices are involved.

Thank you for your help, Catalina!
