1. Calibration-Based Qubit Selection
Load ibm_torino's latest calibration CSV file and sort qubits to minimize error and decoherence using a custom fitness function based on:
√X (sx) error rate ϵ_√X
T_1 relaxation time
T_2 dephasing time
Select the best 80 qubits according to:
argmin_(S⊂Q, |S|=80) ∑_(q_i∈S) [α (ϵ_√X)^(i) − β (T_1)^(i) − γ (T_2)^(i)]
using positive weights α, β, γ to prioritize high-coherence, low-error qubits.
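The main script below ranks qubits with a lexicographic sort on these three calibration columns rather than an explicit weighted sum; a minimal sketch of the weighted score itself is shown here, assuming the same calibration CSV column names and purely illustrative weights α, β, γ (not values used in the experiment).
# Sketch: explicit weighted fitness score over the calibration CSV (illustrative only).
import pandas as pd

def best_qubits_weighted(csv_path: str, n: int,
                         alpha: float = 1.0, beta: float = 0.01, gamma: float = 0.01) -> list[int]:
    df = pd.read_csv(csv_path)
    df.columns = df.columns.str.strip()
    # Lower score is better: penalize sx error, reward long T1 and T2.
    score = alpha * df["√x (sx) error"] - beta * df["T1 (us)"] - gamma * df["T2 (us)"]
    return df.loc[score.nsmallest(n).index, "Qubit"].tolist()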
2. Register Initialization
Allocate:
80-Qubit quantum register Q = {q_0, q_1, ..., q_79}
80-Bit classical register C = {c_0, c_1, ..., c_79}
These allow full resolution of vacuum state configurations across each qubit after execution.
3. Topological Field Loop Encoding
Define 4 cavity sectors, each containing 20 qubits:
Loop_1 = {q_0, ..., q_19}, Loop_2 = {q_20, ..., q_39}, ...
Each loop is topologically closed using a braid-encode-close cycle:
Apply CX gates in a ring structure (forward and closing gates)
Inject a twist-phase via RZ(θ) on each step
For each loop L = {q_i, q_(i+1), ..., q_(i+19)}:
CX(q_k, q_(k+1)); RZ(θ)(q_(k+1)) for k ∈ [i, i + 18]
CX(q_(i+19), q_i); RZ(θ)(q_i) to close loop
where the twist angle is:
θ = π/6
This encodes a topological cavity similar to vacuum boundaries in Möbius/toroidal space, where the vacuum modes wrap non-trivially.
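As a sanity check of the braid-encode-close cycle, a reduced 4-qubit sketch of a single loop (not part of the experiment) can be built and printed:
# Toy sketch: one braid-encode-close loop on 4 qubits, for inspection only.
from math import pi
from qiskit import QuantumCircuit

theta = pi / 6
toy = QuantumCircuit(4)
for k in range(3):
    toy.cx(k, k + 1)          # forward CX along the chain
    toy.rz(theta, k + 1)      # twist phase on each step
toy.cx(3, 0)                  # closing gate wraps the chain into a loop
toy.rz(theta, 0)
print(toy.draw())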
4. Inter-Loop Tunneling Interference
To create Casimir tunneling across non-adjacent cavities, apply:
CX(q_i, q_(i+60)); RZ(−θ)(q_(i+60)) for i ∈ [0, 19]
This bridges qubits from Loop 1 <-> Loop 4, encoding a non-contractible braid interference channel and simulating virtual photon tunneling across disconnected regions, an analog to topological vacuum coupling.
5. Global Field Collapse Layer
Finalize the circuit with a global interference and measurement layer to collapse the field topology. Every adjacent qubit pair undergoes:
CX(q_(2k), q_(2k+1)); RZ(π/4)(q_(2k+1)) for k=0, 1, ..., 39
This layer allows the Casimir-like field modes to destructively or constructively interfere, depending on the internal topological structure, before final readout.
6. Measurement and Transpile
Each qubit q_i is measured into its corresponding classical bit c_i, projecting the state into a classical bitstring b ∈ {0, 1}^80. The circuit is transpiled for ibm_torino with the selected physical qubits as the initial layout (optimization level 3).
7. Parity Sector Extraction
Define a loop parity observable:
parity(b, I) = (∑_(i∈I) b_i) mod 2
for each loop sector I ∈ {Loop_1, ..., Loop_4}
The parity failure rate for a loop is:
P_fail^(L_k) = (1/N) ∑_b f(b) · 1_{parity(b, L_k) = 1}
where:
f(b) = number of occurrences of bitstring b
N = 32768 (total number of shots)
This quantifies how often virtual photons in each cavity settle into an odd (energy-nonzero) state, analogous to zero-point vacuum instability.
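As a toy illustration of the definition (made-up counts, not experimental data), a 4-bit register with a 2-bit "loop" sector gives:
# Toy worked example of the parity failure rate with hypothetical counts.
toy_counts = {"0000": 5, "0110": 3, "1010": 2}   # hypothetical f(b)
N = sum(toy_counts.values())                      # N = 10
loop_a = [0, 1]                                   # toy loop sector
p_fail = sum(f for b, f in toy_counts.items()
             if sum(int(b[i]) for i in loop_a) % 2 == 1) / N
print(p_fail)   # "0110" and "1010" are odd on bits 0,1 -> (3 + 2) / 10 = 0.5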
8. Output, Visualization, JSON
The results are saved to a JSON file containing the raw counts, the twist angle, and the loop parity failure rates, and an initial histogram of the counts is plotted.
Code:
# Main circuit
# Imports
import json, logging, pandas as pd
from math import pi
from qiskit import QuantumCircuit, QuantumRegister, ClassicalRegister, transpile
from qiskit_ibm_runtime import QiskitRuntimeService, SamplerV2
from qiskit.circuit.library import CXGate, RZGate
from qiskit.visualization import plot_histogram
import matplotlib.pyplot as plt
logging.basicConfig(level=logging.INFO)
log = logging.getLogger(__name__)
# IBMQ
TOKEN = "YOUR_IBMQ_KEY"
service = QiskitRuntimeService(
channel="ibm_cloud",
token=TOKEN,
    instance="YOUR_IBMQ_CRN"
)
backend = service.backend("ibm_torino")
# Load and pick best 80 qubits from calibration
def best_qubits(csv_path: str, n: int) -> list[int]:
    df = pd.read_csv(csv_path)
df.columns = df.columns.str.strip()
order = df.sort_values(
["√x (sx) error", "T1 (us)", "T2 (us)"],
ascending=[True, False, False]
)
winners = order["Qubit"].head(n).tolist()
    log.info("Best qubits: %s", winners)
return winners
cal_csv = "/Users/steventippeconnic/Downloads/ibm_torino_calibrations_2025-06-04T13_57_30Z.csv"
physical = best_qubits(cal_csv, 80)
# Quantum and classical registers
qreg = QuantumRegister(80, "q")
creg = ClassicalRegister(80, "c")
qc = QuantumCircuit(qreg, creg)
# Topological Casimir Encoding for multi-loop interference
braid_loops = [
list(range(0, 20)), # Loop 1
list(range(20, 40)), # Loop 2
list(range(40, 60)), # Loop 3
list(range(60, 80)) # Loop 4
]
# Braiding across loops with twist phases
twist_angle = pi / 6
for loop in braid_loops:
for i in range(len(loop) - 1):
qc.append(CXGate(), [qreg[loop[i]], qreg[loop[i + 1]]])
qc.append(RZGate(twist_angle), [qreg[loop[i + 1]]])
# Close the loop
qc.append(CXGate(), [qreg[loop[-1]], qreg[loop[0]]])
qc.append(RZGate(twist_angle), [qreg[loop[0]]])
qc.barrier()
# Global entanglement braid across loop boundaries to simulate tunneling
for i in range(0, 20):
qc.append(CXGate(), [qreg[i], qreg[i + 60]])
qc.append(RZGate(-twist_angle), [qreg[i + 60]])
qc.barrier()
# Final interference layer to simulate field collapse
for i in range(0, 80, 2):
qc.append(CXGate(), [qreg[i], qreg[i + 1]])
qc.append(RZGate(pi / 4), [qreg[i + 1]])
# Measure all qubits
qc.measure(qreg, creg)
# Transpile with initial layout on selected qubits
trans = transpile(
qc,
backend=backend,
initial_layout=physical,
optimization_level=3
)
# Execute
sampler = SamplerV2(mode=backend)
job = sampler.run([trans], shots=32768)
result = job.result()
# Extract counts
creg_name = trans.cregs[0].name
counts = getattr(result[0].data, creg_name).get_counts()
# Compute observable parity of each loop as proxy for vacuum pressure state
def parity(bits: str, idxs: list[int]) -> int:
    # Reverse first: Qiskit bitstrings are little-endian (rightmost char is c_0).
    bits = bits[::-1]
    return sum(int(bits[i]) for i in idxs) % 2
loop_indices = [
list(range(0, 20)),
list(range(20, 40)),
list(range(40, 60)),
list(range(60, 80))
]
parity_distributions = {
f"loop_{i+1}_odd": sum(f for b, f in counts.items() if parity(b, loop_indices[i]) == 1)
for i in range(4)
}
total_shots = sum(counts.values())
parity_rates = {
k: v / total_shots for k, v in parity_distributions.items()
}
# Json
output = {
"experiment_name": "Multi-Cavity Topological Casimir Tunneling (80 qubits)",
"twist_angle": float(twist_angle),
"raw_counts": counts,
"loop_parity_failure_rates": parity_rates
}
json_path = "/Users/steventippeconnic/Documents/QC/Topological_Casimir_Tunneling_0.json"
with open(json_path, "w") as fp:
json.dump(output, fp, indent=4)
log. info("Results saved → %s", json_path)
# Visual
plot_histogram(counts, title="Casimir Topological Mode Outcomes")
plt.show()
# End
/////////////////////////////////////////////////////////////////
# Code for all visuals from experiment JSON
import json
import matplotlib.pyplot as plt
from qiskit.visualization import plot_histogram
from collections import Counter
import numpy as np
from math import log2
# Load results
path = '/Users/steventippeconnic/Documents/QC/Topological_Casimir_Tunneling_0.json'
with open(path, 'r') as f:
data = json.load(f)
counts = data['raw_counts']
bitstrings = list(counts.keys())
frequencies = list(counts.values())
shots = sum(frequencies)
# Helper to extract loop parity
def loop_parity(bitstring, indices):
    # Reverse to match Qiskit's little-endian bit ordering (rightmost char is c_0).
    bitstring = bitstring[::-1]
    return sum(int(bitstring[i]) for i in indices) % 2
# Define loop indices
loop1 = list(range(0, 20))
loop2 = list(range(20, 40))
loop3 = list(range(40, 60))
loop4 = list(range(60, 80))
# Compute parities
loop1_parity = [loop_parity(b, loop1) for b in bitstrings]
loop2_parity = [loop_parity(b, loop2) for b in bitstrings]
loop3_parity = [loop_parity(b, loop3) for b in bitstrings]
loop4_parity = [loop_parity(b, loop4) for b in bitstrings]
# Parity Failure Rates Histogram
loop_failures = {
"Loop 1": sum(f for i, f in enumerate(frequencies) if loop1_parity[i] == 1) / shots,
"Loop 2": sum(f for i, f in enumerate(frequencies) if loop2_parity[i] == 1) / shots,
"Loop 3": sum(f for i, f in enumerate(frequencies) if loop3_parity[i] == 1) / shots,
"Loop 4": sum(f for i, f in enumerate(frequencies) if loop4_parity[i] == 1) / shots,
}
plt.figure()
plt.bar(loop_failures.keys(), loop_failures.values())
plt.ylim(0, 1)
plt.title("Loop Parity Failure Rates")
plt.ylabel("Failure Rate (Odd Parity)")
plt.xlabel("Cavity Loop")
plt.show()
# Bitstring Hamming Weights
hamming_weights = [sum(int(bit) for bit in b) for b in bitstrings]
hw_dist = Counter(hamming_weights)
plt.figure()
hw_keys = sorted(hw_dist.keys())
plt.bar(hw_keys, [hw_dist[k] for k in hw_keys])
plt.title("Hamming Weight Distribution")
plt.xlabel("Hamming Weight")
plt.ylabel("Frequency")
plt.show()
# Joint Parity Distribution (Loop1 vs Loop4)
joint_parity = Counter((loop1_parity[i], loop4_parity[i]) for i in range(len(bitstrings)))
labels = ['Even-Even', 'Even-Odd', 'Odd-Even', 'Odd-Odd']
values = [joint_parity[(0,0)], joint_parity[(0,1)], joint_parity[(1,0)], joint_parity[(1,1)]]
plt.figure()
plt.bar(labels, values)
plt.title("Joint Parity: Loop 1 vs Loop 4")
plt.ylabel("Counts")
plt.xlabel("Parity Sector")
plt.show()
# Rank-Ordered Bitstring Frequencies (Log Scale)
sorted_freqs = sorted(frequencies, reverse=True)
ranks = np.arange(1, len(sorted_freqs) + 1)
plt.figure()
plt.plot(ranks, sorted_freqs)
plt.yscale('log')
plt.title("Bitstring Frequency Rank Plot")
plt.xlabel("Rank")
plt.ylabel("Frequency (log scale)")
plt.grid(True)
plt.show()
# Loop Entropy Contribution Spectrum
loop_entropy = {}
for label, loop in zip(["Loop 1", "Loop 2", "Loop 3", "Loop 4"], [loop1, loop2, loop3, loop4]):
dist = Counter([loop_parity(b, loop) for b in bitstrings])
p0 = sum(frequencies[i] for i in range(len(bitstrings)) if loop_parity(bitstrings[i], loop) == 0) / shots
p1 = 1 - p0
entropy = -p0 * log2(p0) - p1 * log2(p1) if 0 < p0 < 1 else 0
loop_entropy[label] = entropy
plt.figure()
plt.bar(loop_entropy.keys(), loop_entropy.values())
plt.title("Shannon Entropy per Loop Sector")
plt.ylabel("Entropy (bits)")
plt.xlabel("Cavity Loop")
plt.ylim(0, 1)
plt.show()
# Hamming Weight vs Loop 1 Parity
weights_even = []
weights_odd = []
for i, b in enumerate(bitstrings):
hw = sum(int(x) for x in b)
if loop_parity(b, loop1) == 0:
weights_even += [hw] * frequencies[i]
else:
weights_odd += [hw] * frequencies[i]
plt.figure()
plt.hist(weights_even, bins=40, alpha=0.6, label="Loop 1 Even", color="blue")
plt.hist(weights_odd, bins=40, alpha=0.6, label="Loop 1 Odd", color="red")
plt.title("Hamming Weight vs Loop 1 Parity")
plt.xlabel("Hamming Weight")
plt.ylabel("Frequency")
plt.legend()
plt.show()
# Bitstring Parity Shell Map (Loop1 vs Loop2)
joint = Counter((loop_parity(b, loop1), loop_parity(b, loop2)) for b in bitstrings)
labels = ["Even", "Odd"]
Z = np.array([
[joint[(0,0)], joint[(0,1)]],
[joint[(1,0)], joint[(1,1)]]
])
plt.figure()
plt.imshow(Z, cmap="viridis", interpolation="nearest")
plt.xticks([0,1], labels)
plt.yticks([0,1], labels)
plt.title("Parity Shell Heatmap: Loop 1 vs Loop 2")
plt.xlabel("Loop 2 Parity")
plt.ylabel("Loop 1 Parity")
for i in range(2):
for j in range(2):
plt.text(j, i, Z[i, j], ha='center', va='center', color='white')
plt.colorbar()
plt.show()
# Parity Flip Cascade Rank
flip_counts = []
for i in range(len(bitstrings)):
flips = (
loop_parity(bitstrings[i], loop1) +
loop_parity(bitstrings[i], loop2) +
loop_parity(bitstrings[i], loop3) +
loop_parity(bitstrings[i], loop4)
)
flip_counts += [flips] * frequencies[i]
ranked = Counter(flip_counts)
ranks = sorted(ranked.keys())
values = [ranked[k] for k in ranks]
plt.figure()
plt.bar(ranks, values, color='purple')
plt.title("Loop Parity Flip Cascade")
plt.xlabel("Number of Parity-Flipped Loops")
plt.ylabel("Frequency")
plt.xticks(range(5))
plt.show()
# Vacuum Curvature Spectrum (L12 vs L34)
hemisphere_diff = []
for i, b in enumerate(bitstrings):
l12 = loop_parity(b, loop1) + loop_parity(b, loop2)
l34 = loop_parity(b, loop3) + loop_parity(b, loop4)
diff = l12 - l34
hemisphere_diff.extend([diff] * frequencies[i])
diff_counts = Counter(hemisphere_diff)
x = sorted(diff_counts.keys())
y = [diff_counts[k] for k in x]
plt.figure()
plt.bar(x, y, color='darkgreen')
plt.title("Vacuum Curvature Spectrum (Loops 1+2 vs 3+4)")
plt.xlabel("Parity Imbalance (Hemisphere Difference)")
plt.ylabel("Frequency")
plt.show()
# Vacuum Entanglement Field Map (Pairwise Loop Parity Correlation)
pairs = [
("L1-L2", loop1, loop2),
("L1-L3", loop1, loop3),
("L1-L4", loop1, loop4),
("L2-L3", loop2, loop3),
("L2-L4", loop2, loop4),
("L3-L4", loop3, loop4)
]
labels = [p[0] for p in pairs]
values = []
for label, A, B in pairs:
match = sum(
frequencies[i]
for i, b in enumerate(bitstrings)
if loop_parity(b, A) == loop_parity(b, B)
)
values.append(match / shots)
plt.figure()
plt.bar(labels, values, color='navy')
plt.ylim(0, 1)
plt.title("Vacuum Entanglement Field Map")
plt.ylabel("Parity Agreement Rate")
plt.xticks(rotation=45)
plt.show()
# Topological Hamming Shell Energy Landscape
flip_vs_weight = {}
for i, b in enumerate(bitstrings):
hw = sum(int(x) for x in b)
flips = sum(loop_parity(b, loop) for loop in [loop1, loop2, loop3, loop4])
if hw not in flip_vs_weight:
flip_vs_weight[hw] = []
flip_vs_weight[hw].extend([flips] * frequencies[i])
xs = sorted(flip_vs_weight.keys())
ys = [np.mean(flip_vs_weight[x]) for x in xs]
plt.figure()
plt.plot(xs, ys, marker='o', linestyle='-', color='teal')
plt.title("Topological Hamming Shell Energy Landscape")
plt.xlabel("Hamming Weight")
plt.ylabel("Average Loop Parity Flips")
plt.grid(True)
plt.show()
# Dominant Shell Frequency Gradient
hamming_shells = {}
for i, b in enumerate(bitstrings):
hw = sum(int(x) for x in b)
flips = sum(loop_parity(b, loop) for loop in [loop1, loop2, loop3, loop4])
if hw not in hamming_shells:
hamming_shells[hw] = []
hamming_shells[hw].extend([flips] * frequencies[i])
xs = sorted(hamming_shells.keys())
ys = []
for x in xs:
mode = Counter(hamming_shells[x]).most_common(1)[0][0]
ys.append(mode)
plt.figure()
plt.plot(xs, ys, marker='s', linestyle='-', color='crimson')
plt.title("Dominant Parity Shell per Hamming Weight")
plt.xlabel("Hamming Weight")
plt.ylabel("Most Frequent Flip Count")
plt.grid(True)
plt.show()
# End