1. Backend Calibration-Based Qubit Selection
From ibm_sherbrooke's recent calibration data, select the best 49 qubits using the tuple:
q_i -> ((T_1)^(i), (T_2)^(i), (ϵ_√X)^(i))
Order the qubits by ascending ϵ_√X, breaking ties by descending T_1 and T_2. Expressed as a single weighted cost:
Best49 = argmin_(S ⊂ Q, ∣S∣ = 49) ∑_(q_i ∈ S) [α(ϵ_√X)^(i) - β(T_1)^(i) - γ(T_2)^(i)]
for positive weights α, β, γ. These qubits are locked in via the initial_layout during transpilation.
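The full script below implements the lexicographic ordering directly. For completeness, here is a minimal sketch of the literal weighted-cost variant; the column names follow the calibration CSV already used below, while the best_qubits_weighted name and the α, β, γ values are illustrative assumptions:
import pandas as pd

def best_qubits_weighted(csv_path: str, n: int = 49,
                         alpha: float = 1.0, beta: float = 1e-3, gamma: float = 1e-3) -> list[int]:
    # Score each qubit by α·ϵ_√X − β·T1 − γ·T2 and keep the n lowest-cost qubits.
    df = pd.read_csv(csv_path)
    df.columns = df.columns.str.strip()
    cost = (alpha * df["√x (sx) error"]
            - beta * df["T1 (us)"]
            - gamma * df["T2 (us)"])
    return df.assign(cost=cost).nsmallest(n, "cost")["Qubit"].tolist()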
2. Registers
Define three quantum registers:
Data register D = {d_0, …, d_24}
Ancilla register A = {a_0, …, a_23}
Classical register C = {c_0, …, c_48}
The 25 data qubits are arranged on a 5 x 5 grid; data qubit d_q occupies row r_q = ⌊q/5⌋ and column c_q = q mod 5.
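A minimal sketch of the bookkeeping this implies, register shapes plus the index → (row, column) convention used in the sections that follow (the grid_pos helper is illustrative, not part of the run script):
from qiskit import QuantumRegister, ClassicalRegister

D = QuantumRegister(25, "d")    # data qubits d_0 … d_24
A = QuantumRegister(24, "a")    # ancilla qubits a_0 … a_23
C = ClassicalRegister(49, "c")  # c_0 … c_23 syndrome, c_24 … c_48 data

def grid_pos(q: int) -> tuple[int, int]:
    # Data qubit d_q sits at row ⌊q/5⌋, column q mod 5 of the 5 x 5 grid.
    return divmod(q, 5)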
3. Twistor Phase Layer: Dual‑Diagonal Encoding
Apply the following single-qubit phase layer:
U_Twist(q) = R_Z(±2ϕ)
with:
+2ϕ for qubits where row = col (main diagonal),
−2ϕ for qubits where row + col = 4 (anti‑diagonal),
and ϕ = π/8.
Thus, qubits d_q with (r_q, c_q) satisfying the above are phase-rotated with (using the convention R_Z(θ) = e^(−iθZ/2)):
R_Z(2ϕ) = e^(−iϕZ)
R_Z(−2ϕ) = e^(+iϕZ)
This embeds two orthogonal Twistor flows, each protecting one logical direction by geometrically redirecting phase-flip noise along null-like geodesics.
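As a quick, self-contained numerical check of the sign convention above (Qiskit's RZGate(θ) implements e^(−iθZ/2)):
import numpy as np
from qiskit.circuit.library import RZGate
from qiskit.quantum_info import Operator

phi = np.pi / 8
rz = Operator(RZGate(2 * phi)).data
target = np.diag([np.exp(-1j * phi), np.exp(+1j * phi)])  # e^(−iϕZ)
assert np.allclose(rz, target)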
4. Stabiliser Syndrome Extraction
Construct the stabilisers using a standard surface-code patch. Each face of the lattice is defined by:
F_(r,c) = {(r, c), (r, c + 1), (r + 1, c), (r + 1, c + 1)}
A chequerboard split of the 16 faces in the 4 x 4 interior of the grid yields 8 X-type and 8 Z-type stabilisers.
For each X-type stabiliser, the ancilla acts as the control, so the Hadamard sandwich measures the X parity of the face:
H(a_j) -> ∏_(d_i ∈ F) CX(a_j, d_i) -> H(a_j)
For each Z-type stabiliser:
∏_(d_i ∈ F) CX(d_i, a_j)
Measure each ancilla qubit into classical bit c_j. The 16 stabiliser outcomes occupy bits c_0 … c_15; the unused bits c_16 … c_23 read 0, so the stored syndrome field remains 24 bits wide.
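For reference, a short sketch that tabulates the face membership as binary parity-check rows. It mirrors the face() helper and chequerboard split used in the full script below; the H_X matrix is purely illustrative and is not used by the experiment:
import numpy as np

def face(r, c):
    i = 5 * r + c
    return [i, i + 1, i + 5, i + 6]

faces = [(r, c) for r in range(4) for c in range(4)]
X_faces = [f for f in faces if sum(f) % 2 == 0]  # 8 chequerboard faces
Z_faces = [f for f in faces if sum(f) % 2 == 1]  # 8 chequerboard faces
H_X = np.zeros((len(X_faces), 25), dtype=int)
for j, (r, c) in enumerate(X_faces):
    H_X[j, face(r, c)] = 1
print(H_X.sum(axis=1))  # every check has weight 4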
5. Data Qubit Measurement
Each of the 25 data qubits is measured directly into a classical bit:
d_i -> c_(24 + i)
capturing the projected computational state of the full surface code after one round.
6. Logical Qubit Definitions and Failure Calculation
Two logical qubits are encoded:
Logical‑Z_1: parity of the top row d_0 … d_4, read from classical bits:
Z_1 = parity(c_24, c_25, c_26, c_27, c_28)
Logical‑Z_2: parity of the left column d_0, d_5, d_10, d_15, d_20, read from classical bits:
Z_2 = parity(c_24, c_29, c_34, c_39, c_44)
The failure rate for each is defined as:
Fail_Z = (1/N) ∑_(bitstrings b) 1_({Z(b) = 1}) f(b)
where f(b) is the observed count of bitstring b and N = 8192 is the total number of shots.
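A toy worked example of this estimator on synthetic counts (not run data); note Qiskit bitstrings are little-endian, so the string is reversed before indexing by classical bit number:
def parity(bits: str, idxs: list[int]) -> int:
    bits = bits[::-1]  # classical bit c_0 is the rightmost character
    return sum(int(bits[i]) for i in idxs) & 1

counts = {"0" * 49: 8000, "0" * 24 + "1" * 25: 192}  # synthetic counts
Z1_idx = [24 + i for i in range(5)]  # c_24 … c_28 = top-row data bits
N = sum(counts.values())
fail_Z1 = sum(f for b, f in counts.items() if parity(b, Z1_idx)) / N
print(fail_Z1)  # 192 / 8192 ≈ 0.0234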
7. Execution and Storage
The circuit is transpiled and executed. The results are saved to a JSON file that includes "raw_counts", "logical_Z1_failure_rate", "logical_Z2_failure_rate", and "twistor_phase".
Code:
# Main circuit
# Imports
import json, logging, pandas as pd
from math import pi
from qiskit import QuantumCircuit, QuantumRegister, ClassicalRegister, transpile
from qiskit_ibm_runtime import QiskitRuntimeService, SamplerV2
from qiskit.circuit.library import RZGate
from qiskit.visualization import plot_histogram
import matplotlib.pyplot as plt
logging.basicConfig(level=logging.INFO)
log = logging.getLogger(__name__)
# IBMQ
token = "YOUR_IBMQ_API_KEY"
service = QiskitRuntimeService(
channel="ibm_cloud",
token=token,
    instance="YOUR_IBMQ_INSTANCE_CRN"
)
backend = service.backend("ibm_sherbrooke")
# Pick best 49 physical qubits
def best_qubits(csv_path: str, n: int) -> list[int]:
    # Rank by ascending √X error, breaking ties by descending T1 and T2.
    df = pd.read_csv(csv_path)
    df.columns = df.columns.str.strip()
    order = df.sort_values(
        ["√x (sx) error", "T1 (us)", "T2 (us)"],
        ascending=[True, False, False]
    )
    winners = order["Qubit"].head(n).tolist()
    log.info("Best qubits: %s", winners)
    return winners
cal_csv = "/Users/steventippeconnic/Downloads/ibm_sherbrooke_calibrations_2025-04-17T21_06_13Z.csv"
physical = best_qubits(cal_csv, 49)
# Quantum & classical registers
d_q, a_q, c_q = (QuantumRegister(25, "d"), # data
QuantumRegister(24, "a"), # ancilla
ClassicalRegister(49, "c")) # classical
qc = QuantumCircuit(d_q, a_q, c_q)
# Dual‑Diagonal Twistor RZ layer: RZ(+2φ) on the main diagonal (row == col), RZ(−2φ) on the anti‑diagonal (row + col == 4)
phi = pi / 8
for q in range(25):
row, col = divmod(q, 5)
if row == col:
qc.append(RZGate(+2 * phi), [d_q[q]])
elif row + col == 4:
qc.append(RZGate(-2 * phi), [d_q[q]])
qc.barrier()
# Surface‑code faces & stabilisers (5 x 5 data‑qubit patch)
def face(row: int, col: int):
i = row * 5 + col
return [i, i + 1, i + 5, i + 6]
faces = [(r, c) for r in range(4) for c in range(4)]
X_faces = [f for f in faces if sum(f) % 2 == 0]  # chequerboard split: 8 X‑type faces
Z_faces = [f for f in faces if sum(f) % 2 == 1]  # 8 Z‑type faces
# One full syndrome‑extraction round
for idx, (r, c) in enumerate(X_faces + Z_faces):
    stab_type = "X" if idx < len(X_faces) else "Z"
data = face(r, c)
a = a_q[idx]
    if stab_type == "X":
        qc.h(a)
        for d in data:
            qc.cx(a, d_q[d])  # ancilla controls, so the H sandwich measures X parity
        qc.h(a)
    else:  # Z‑type: data controls, ancilla target accumulates the Z parity
        for d in data:
            qc.cx(d_q[d], a)
qc.measure(a, c_q[idx])
qc.barrier()
# Measure all data qubits
for d in range(25):
qc.measure(d_q[d], c_q[24 + d])
# Transpile on chosen physical qubits
trans = transpile(qc, backend=backend,
initial_layout=physical,
optimization_level=3)
# Execute with SamplerV2 (no Session required)
sampler = SamplerV2(mode=backend)
job = sampler.run([trans], shots=8192)
result = job.result()
creg = trans.cregs[0].name  # "c"
counts = getattr(result[0].data, creg).get_counts()
# Logical‑Z failure rates, Z1 = top row, Z2 = left column
def parity(bits: str, idxs: list[int]) -> int:
    bits = bits[::-1]  # Qiskit bitstrings are little‑endian; index by classical bit number
    return sum(int(bits[i]) for i in idxs) & 1
Z1_idx = [24 + i for i in range(5)] # top‑row data bits
Z2_idx = [24 + 5*i for i in range(5)] # left‑col data bits
shots = sum(counts.values())
z1_fail = sum(f for b, f in counts.items() if parity(b, Z1_idx))
z2_fail = sum(f for b, f in counts.items() if parity(b, Z2_idx))
log.info("Logical‑Z1 failure rate: %.5f", z1_fail / shots)
log.info("Logical‑Z2 failure rate: %.5f", z2_fail / shots)
# Results and plot
out = {
"experiment_name": "Dual‑Diagonal Twistor Surface Code (49‑qubit, d=3)",
"twistor_phase": float(phi),
"raw_counts": counts,
"logical_Z1_failure_rate": z1_fail / shots,
"logical_Z2_failure_rate": z2_fail / shots
}
json_path = "/Users/steventippeconnic/Documents/QC/Dual_Diagonal_Twistor_Weighted_Surface_Code_0.json"
with open(json_path, "w") as fp: json.dump(out, fp, indent=4)
log.info("Results saved → %s", json_path)
plot_histogram(counts,
title="Dual‑Diagonal Twistor Surface‑Code Syndrome Outcomes")
plt.show()
# End.
/////////////////////////////////////////////////////////////////
Code for All Visuals From Run Data
import json
import numpy as np
import matplotlib.pyplot as plt
import seaborn as sns
from collections import Counter
from scipy.stats import entropy
# Load JSON
file_path = '/Users/steventippeconnic/Documents/QC/Dual_Diagonal_Twistor_Weighted_Surface_Code_0.json'
with open(file_path) as f:
data = json.load(f)
# Qiskit bitstrings are little‑endian; reverse each key so that index i = classical bit c_i
counts = {b[::-1]: f for b, f in data['raw_counts'].items()}
bitstrings = list(counts.keys())
shots = sum(counts.values())
# Logical parity functions
def parity(bits, idxs): return sum(int(bits[i]) for i in idxs) % 2
Z1 = [24+i for i in range(5)] # Top row
Z2 = [24+5*i for i in range(5)] # Left col
# Symmetry Heatmap Z1 vs Z2
zmap = np.zeros((2,2))
for b, f in counts.items():
    p1 = parity(b, Z1)
    p2 = parity(b, Z2)
    zmap[p1][p2] += f  # weight by shot count, not by distinct bitstring
plt.figure(figsize=(5,5))
sns.heatmap(zmap, annot=True, fmt='g', cmap='Blues')
plt.title("Logical-Z1 vs Z2 Parity Symmetry")
plt.xlabel("Z2 Parity")
plt.ylabel("Z1 Parity")
plt.xticks([0.5,1.5], ['Even','Odd'])
plt.yticks([0.5,1.5], ['Even','Odd'])
plt.tight_layout()
plt.show() # Pause here
# Polar Parity Density Plot
angle_map = []
for b, f in counts.items():
    x = parity(b, Z1)
    y = parity(b, Z2)
    angle_map.extend([np.arctan2(y, x)] * f)  # repeat by shot count
angles, freqs = np.unique(angle_map, return_counts=True)
plt.figure(figsize=(6,6))
plt.polar(angles, freqs, marker='o')
plt.title("Polar Parity Flow (Z1 vs Z2)")
plt.tight_layout()
plt.show() # Pause here
# Hamming Distance Shells vs Ideal
ideal = '0'*49
def hamming_distance(a,b): return sum(x != y for x,y in zip(a,b))
shells = Counter()
for b, f in counts.items():
    shells[hamming_distance(b, ideal)] += f
x = sorted(shells.keys())
y = [shells[k] for k in x]
plt.figure(figsize=(8,5))
plt.bar(x, y)
plt.title("Hamming Distance Shells from Ground State")
plt.xlabel("Distance from |0⟩")
plt.ylabel("Frequency")
plt.tight_layout()
plt.show() # Pause here
# Entropy by Ranked Bitstring
probs = np.array(sorted(counts.values(), reverse=True)) / shots  # empirical distribution
ent = entropy(probs, base=2)
plt.figure(figsize=(10,4))
plt.plot(probs, 'g')
plt.title(f"Bitstring Rank Probability - Entropy ≈ {ent:.4f} bits")
plt.xlabel("Rank (descending)")
plt.ylabel("Probability")
plt.tight_layout()
plt.show()
# Logical Parity vs Syndrome Weight
weights = {w: [0, 0] for w in range(25)}  # syndrome weight → [fail count, total count]
for b, f in counts.items():
    w = b[:24].count('1')  # ancilla bits c_0 … c_23
    weights[w][0] += parity(b, Z1) * f
    weights[w][1] += f
x = sorted(weights.keys())
y = [weights[k][0]/weights[k][1] if weights[k][1] > 0 else 0 for k in x]
plt.figure(figsize=(8,5))
plt.plot(x, y, marker='o', color='darkred')
plt.title("Logical-Z1 Failure Rate vs Syndrome Weight")
plt.xlabel("Syndrome Weight (Number of 1s among 24 ancillas)")
plt.ylabel("P(Z1 Failure | Weight)")
plt.grid(True)
plt.tight_layout()
plt.show()
# X vs Z Plaquette Syndrome Heatmap
X_idxs = range(8)      # X‑face ancillas c_0 … c_7 (8 chequerboard faces)
Z_idxs = range(8, 16)  # Z‑face ancillas c_8 … c_15
xz_map = Counter()
for b, f in counts.items():
    sx = sum(int(b[i]) for i in X_idxs)
    sz = sum(int(b[i]) for i in Z_idxs)
    xz_map[(sx, sz)] += f
nx = max(sx for sx, _ in xz_map)
nz = max(sz for _, sz in xz_map)
grid = np.zeros((nx+1, nz+1))
for (sx, sz), f in xz_map.items():
grid[sx, sz] = f
plt.figure(figsize=(6,5))
sns.heatmap(grid.T, cmap="crest", cbar=True, annot=False)
plt.title("X vs Z Plaquette Activation (Syndrome Space)")
plt.xlabel("Number of X Stabiliser Clicks")
plt.ylabel("Number of Z Stabiliser Clicks")
plt.tight_layout()
plt.show()
# Classical Bit Activation Map (49 bits)
activation = np.zeros(49)
for b, f in counts.items():
    for i in range(49):
        activation[i] += int(b[i]) * f
activation /= shots
plt.figure(figsize=(12,4))
plt.bar(range(49), activation, color='purple')
plt.title("Classical Register Bit Activation Rate")
plt.xlabel("Bit Index (0–23: ancillas, 24–48: data qubits)")
plt.ylabel("P(bit = 1)")
plt.tight_layout()
plt.show()
# Ancilla Pair Correlation Matrix
A = np.array([[int(b[j]) for j in range(24)] for b in bitstrings])
A = np.repeat(A, [counts[b] for b in bitstrings], axis=0)  # weight rows by shot count
corr = np.corrcoef(A, rowvar=False)  # zero‑variance ancillas yield NaN rows (blank cells)
plt.figure(figsize=(7,6))
sns.heatmap(corr, cmap="vlag", center=0, vmin=-1, vmax=1)
plt.title("Ancilla Pairwise Correlation Matrix (ρ)")
plt.xlabel("Ancilla Index")
plt.ylabel("Ancilla Index")
plt.tight_layout()
plt.show()
# Error Stratification Ring Profile
layers = Counter()  # summed data‑bit weight per syndrome shell
totals = Counter()  # shots per shell
for b, f in counts.items():
    w = sum(int(b[i]) for i in range(24))      # syndrome weight
    d = sum(int(b[i]) for i in range(24, 49))  # data bit weight
    layers[w] += d * f
    totals[w] += f
x = sorted(layers.keys())
y = [layers[k] / totals[k] for k in x]  # mean data‑bit weight within each shell
plt.figure(figsize=(9,5))
plt.plot(x, y, marker='o', color='darkorange')
plt.title("Error Stratification: Data Bit Weight vs Syndrome Shell")
plt.xlabel("Syndrome Weight (0–24)")
plt.ylabel("Mean Data Bit Activity")
plt.grid(True)
plt.tight_layout()
plt.show()
# Geodesic Sector Occupancy Map
# Four data diagonals (center-focused)
sectors = {
"Main Diag": [24, 30, 36, 42, 48],
"Anti Diag": [28, 32, 36, 40, 44],
"Horizontal": [24,25,26,27,28],
"Vertical": [24,29,34,39,44]
}
occupancy = {s: 0 for s in sectors}
for b, f in counts.items():
    for s in sectors:
        occupancy[s] += sum(int(b[i]) for i in sectors[s]) * f
for s in occupancy:
    occupancy[s] /= shots
plt.figure(figsize=(7,5))
plt.bar(occupancy.keys(), occupancy.values(), color='teal')
plt.title("Twistor Sector Occupancy (Average Activation per Shot)")
plt.ylabel("Mean Activation")
plt.xticks(rotation=15)
plt.tight_layout()
plt.show()
# Logical Parity vs Ancilla Entropy
bins = {(0,0): [], (0,1): [], (1,0): [], (1,1): []}
for b, f in counts.items():
    z1 = parity(b, Z1)
    z2 = parity(b, Z2)
    p = np.mean([int(b[i]) for i in range(24)])
    h = entropy([p, 1 - p], base=2)
    bins[(z1, z2)].extend([h] * f)  # weight by shot count
labels = ["Z1=0,Z2=0", "Z1=0,Z2=1", "Z1=1,Z2=0", "Z1=1,Z2=1"]
means = [np.mean(bins[key]) if bins[key] else 0.0 for key in [(0,0), (0,1), (1,0), (1,1)]]
plt.figure(figsize=(7,5))
plt.bar(labels, means, color='slateblue')
plt.title("Ancilla Entropy by Logical Parity Sector")
plt.ylabel("Shannon Entropy (bits)")
plt.tight_layout()
plt.show()
# Midpoint Bit Correlation to Z1/Z2 Failures
bit36, z1s, z2s = [], [], []
for b, f in counts.items():
    bit36.extend([int(b[36])] * f)  # classical bit 36 = data qubit d_12 (grid centre)
    z1s.extend([parity(b, Z1)] * f)
    z2s.extend([parity(b, Z2)] * f)
mid_z1_corr = np.corrcoef(bit36, z1s)[0,1]
mid_z2_corr = np.corrcoef(bit36, z2s)[0,1]
plt.figure(figsize=(6,5))
plt.bar(["Mid↔Z1", "Mid↔Z2"], [mid_z1_corr, mid_z2_corr], color='crimson')
plt.title("Midpoint Bit (36) Correlation to Logical Failures")
plt.ylabel("Pearson ρ")
plt.ylim(-1,1)
plt.tight_layout()
plt.show()
# End.