3D noise-mapping sanity check on code

This is my first time posting here, and I’m still pretty new to device-level characterization.
I’m self-taught and have only been using IBM’s free 10-minute trial sessions, so my access is very limited. Because of that, I might be misunderstanding pieces of this, and I’d really appreciate any corrections or guidance.

If I repeatedly run the same global circuit over time, can I treat the resulting parity visibility as a probe of correlated noise across the device?

The steps are:

1. Reconstruct an approximate 3D layout of the qubits using graph distances + MDS.
2. Build a time series of parity visibility.
3. Run spectral and cross-spectral analysis to check whether any coherent correlations appear across the device.
4. Interpolate the results into a smooth 3D field for visualization.
5. Run surrogate tests to check whether any structure is statistically meaningful.

I’m just trying to get better at understanding how noise behaves spatially and temporally on hardware. I’m presuming all machines are susceptible to external noise, and if I can pre-map consistent noise, perhaps I can use that map in future runs to clean up my results.
Since I only have the free 10-minute windows on IBM hardware:

I’m not sure whether my dataset is large enough

I’m not sure whether my interpretation is valid

I might have made conceptual mistakes without realizing it, so I’d be really grateful if anyone could review the code and tell me whether the approach makes sense, or even test it on a real backend if you have more quota than I do. I have run various versions with much smaller runs, and the results map fairly well; actually a little better than I thought they would.

So even “this whole approach is nuts” would be helpful to hear lol.

Code looks like this (though I tweak it regularly, and getting it to run is often half the fun):

import json
import numpy as np
from datetime import datetime
from collections import defaultdict

from qiskit import QuantumCircuit, transpile
from qiskit_ibm_runtime import QiskitRuntimeService, SamplerV2

from sklearn.manifold import MDS
from scipy.signal import welch, csd
from scipy.interpolate import RBFInterpolator

# -----------------------------------------------------

# CONFIG (replace token and backend name)

# -----------------------------------------------------

API_TOKEN = "YOUR_IBM_API_TOKEN"
BACKEND_NAME = "ibm_marrakesh"
N_QUBITS = 127
REPEATS = 200   # kept moderate for easier testing
SHOTS = 4096

service = QiskitRuntimeService(channel="ibm_quantum", token=API_TOKEN)
backend = service.backend(BACKEND_NAME)

# -----------------------------------------------------

# 1. PARITY CIRCUIT

# -----------------------------------------------------

def parity_circuit(n_qubits):
    """GHZ-style circuit: H on qubit 0, a CNOT chain, then H on all qubits."""
    qc = QuantumCircuit(n_qubits)
    qc.h(0)
    for i in range(n_qubits - 1):
        qc.cx(i, i + 1)
    qc.h(range(n_qubits))
    qc.measure_all()
    return qc

# -----------------------------------------------------

# 2. SUBMIT JOBS

# -----------------------------------------------------

job_ids = []
for r in range(REPEATS):
    qc = parity_circuit(N_QUBITS)
    tqc = transpile(qc, backend=backend, optimization_level=1)
    job = SamplerV2(mode=backend).run([tqc], shots=SHOTS)
    job_ids.append(job.job_id())
    print(f"Submitted batch {r+1}/{REPEATS}")

with open("job_ids.txt", "w") as f:
    f.write("\n".join(job_ids))

# -----------------------------------------------------

# 3. FETCH RESULTS (run later)

# -----------------------------------------------------

def fetch_results(job_ids):
    data = []
    for jid in job_ids:
        job = service.job(jid)
        result = job.result()
        counts = result[0].data.meas.get_counts()
        t = job.metrics()["timestamps"]["finished"]
        data.append({"counts": counts, "timestamp": t})
    return data

# -----------------------------------------------------

# 4. PARITY VISIBILITY

# -----------------------------------------------------

def parity_visibility(counts):
    total = sum(counts.values())
    even = sum(v for k, v in counts.items() if k.count("1") % 2 == 0)
    odd = total - even
    return abs(even - odd) / (total + 1e-12)

def build_time_series(results):
    ts = [datetime.fromisoformat(r["timestamp"]).timestamp() for r in results]
    vis = [parity_visibility(r["counts"]) for r in results]

    ts = np.array(ts)
    vis = np.array(vis)

    # sort chronologically, since jobs may finish out of order
    idx = np.argsort(ts)
    return ts[idx], vis[idx]


# -----------------------------------------------------

# 5. MDS LAYOUT (approximate)

# -----------------------------------------------------

def mds_layout(backend):
    cfg = backend.configuration()
    n = cfg.n_qubits
    cm = cfg.coupling_map

    # build graph-distance matrix from the coupling map
    D = np.full((n, n), np.inf)
    for i in range(n):
        D[i, i] = 0
    for a, b in cm:
        D[a, b] = 1
        D[b, a] = 1

    # Floyd–Warshall for all-pairs shortest paths
    for k in range(n):
        for i in range(n):
            for j in range(n):
                D[i, j] = min(D[i, j], D[i, k] + D[k, j])

    # MDS embedding in 3D
    coords = MDS(
        n_components=3, dissimilarity="precomputed", random_state=0
    ).fit_transform(D)

    return coords


# -----------------------------------------------------

# 6. CROSS-SPECTRAL WAVEVECTOR

# -----------------------------------------------------

def estimate_wavevector(ts, vis, coords, freq_hint=None):
    fs = 1.0 / np.median(np.diff(ts))

    # dominant frequency (skip the DC bin)
    f, P = welch(vis, fs=fs)
    if freq_hint is None:
        peak = f[np.argmax(P[1:]) + 1]
    else:
        peak = freq_hint

    # compute phase delays across qubits by CSD vs a reference
    A = []
    b = []

    for q in range(len(coords)):
        # placeholder: this minimal version uses the same signal for
        # both CSD inputs, so every phase comes out zero; swap the first
        # argument for a per-qubit signal once one is available
        f, C = csd(vis, vis, fs=fs)
        idx = np.argmin(np.abs(f - peak))
        phase = np.angle(C[idx])

        A.append(coords[q])
        b.append(phase)

    A = np.array(A)
    b = np.array(b)

    # least-squares fit of phase = k . r to estimate a wavevector
    k_vec, _, _, _ = np.linalg.lstsq(A, b, rcond=None)
    return k_vec, peak


# -----------------------------------------------------

# 7. INTERPOLATION FIELD

# -----------------------------------------------------

def interpolate_field(coords, vis, frame=0, res=40):
    xs, ys, zs = coords[:, 0], coords[:, 1], coords[:, 2]
    # minimal version: every qubit gets the same global visibility value
    # for the chosen frame; replace with per-qubit values when available
    vals = np.full(len(coords), vis[frame])

    # the "gaussian" kernel requires an explicit epsilon (shape parameter)
    interp = RBFInterpolator(
        np.column_stack([xs, ys, zs]), vals, kernel="gaussian", epsilon=1.0
    )

    grid_x = np.linspace(xs.min(), xs.max(), res)
    grid_y = np.linspace(ys.min(), ys.max(), res)
    grid_z = np.linspace(zs.min(), zs.max(), res)

    X, Y, Z = np.meshgrid(grid_x, grid_y, grid_z)
    V = interp(np.column_stack([X.ravel(), Y.ravel(), Z.ravel()])).reshape((res, res, res))

    return X, Y, Z, V
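One piece that isn't in the snippet yet is step 5, the surrogate tests. Here's a rough sketch of what I have in mind: shuffle the visibility time series to destroy any temporal structure while keeping the value distribution, then compare the observed spectral peak against the shuffled ensemble. The signal below is entirely synthetic (an injected slow oscillation plus noise), just to show the mechanics:

```python
import numpy as np
from scipy.signal import welch

def peak_power(x, fs):
    """Largest Welch PSD value, ignoring the DC bin."""
    f, P = welch(x, fs=fs)
    return P[1:].max()

def shuffled_surrogate(x, rng):
    """Random permutation: keeps the amplitude distribution but
    destroys all temporal ordering (null: no coherent oscillation)."""
    return rng.permutation(x)

rng = np.random.default_rng(0)
fs = 1.0  # one sample per circuit repetition, in arbitrary units
t = np.arange(512) / fs

# synthetic visibility series: slow oscillation buried in noise
vis = 0.5 + 0.05 * np.sin(2 * np.pi * 0.1 * t) + 0.01 * rng.standard_normal(512)

observed = peak_power(vis, fs)
surrogate_peaks = [peak_power(shuffled_surrogate(vis, rng), fs)
                   for _ in range(200)]

# one-sided p-value: fraction of surrogates with a peak at least as large
p_value = np.mean([s >= observed for s in surrogate_peaks])
print(f"observed peak: {observed:.4g}, p-value: {p_value:.3f}")
```

With real data I'd run the same comparison on the actual visibility series instead of the synthetic one, and only trust spectral peaks whose p-value stays small.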

Hi @Leon , welcome to the Forum!

From your description it sounds like you want to do something like zero noise extrapolation. Is this correct?

We have this demo that describes the details of performing error mitigation, and specifically zero-noise extrapolation.

You may also be interested in this demo on error mitigation, which is based on an IBM paper but runs on a noisy PennyLane simulator instead of on actual quantum hardware. That way you avoid the issues involved in spending the 10 minutes you have on IBM machines.
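For a quick feel for what ZNE does, here is a minimal toy sketch (not the demos' actual code): measure an expectation value at several artificially amplified noise levels, fit a curve, and extrapolate back to zero noise. The numbers below are made up for illustration; on hardware the noise is usually scaled by gate folding, which the demos explain.

```python
import numpy as np

def zne_linear(scale_factors, expectation_values):
    """Linear (Richardson-style) zero-noise extrapolation:
    fit expectation vs. noise scale and return the intercept at scale 0."""
    slope, intercept = np.polyfit(scale_factors, expectation_values, deg=1)
    return intercept

# toy model: measured expectation decays linearly with the noise scale
scales = [1.0, 2.0, 3.0]        # 1.0 = the circuit as-is
measured = [0.90, 0.80, 0.70]   # made-up noisy expectation values

mitigated = zne_linear(scales, measured)
print(f"zero-noise estimate: {mitigated:.3f}")  # extrapolates to ~1.0
```

Real devices rarely decay this linearly, so in practice you'd also try richer fits (exponential, polynomial) as the demos show.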

That being said, I’m not sure whether you can “get better at understanding how noise behaves spatially and temporally on hardware” in a way that would actually be consistent over many runs. I know these devices undergo constant calibration (daily?) and there are things like solar flares that can add to the noise. The quantum computer itself has some random noise that cannot be removed, besides any external sources of noise. So it’s a very valid question to ask, I’m just not sure that the nature of the problem is such that we can get a deterministic answer.

Just to be clear, I’m not an expert on IBM hardware or software, so I may be wrong in my assumptions, but I just wanted to share my thoughts in case they help!