Quantum Integration and the Model That Invented Its Own Logs
What CAPT Actually Does With Quantum Computing
CAPT's quantum integration is real. The modules exist. The circuits are written. The architecture for interfacing with quantum backends is built. What is not real — at least not in the form GitNexus described — are the polished JSON execution logs it fabricated, complete with IBM Quantum success rates, Rigetti error profiles, and Braket throughput numbers.
This is worth dwelling on, because it illustrates something important: a model with enough context can generate convincing specificity about things that don't exist. The lesson is not that AI is useless. The lesson is that confident specificity is not the same as accuracy, and that you should always know what the model actually has access to.
What GitNexus had access to: file names, file contents, and structural relationships between modules.
What it did not have access to: commit history, execution logs, runtime data, or anything that would tell it what had actually been run against what hardware.
What it generated anyway: timestamped JSON objects, named quantum backends, percentage success rates, fidelity metrics down to two decimal places.
Keep this in mind every time you ask an AI to tell you about something it can see the name of but not the contents of.
The Real Quantum Architecture
Here is what the codebase actually contains:
qipc.py — Quantum-Inspired Processing Core
The primary interface between CAPT's classical reasoning pipeline and quantum computation. QIPC (Quantum-Inspired Processing Core) provides:
- Translation of classical optimization problems into quantum circuit representations
- Backend-agnostic execution (designed to run against Qiskit/IBM, Cirq/Google, Braket/AWS, or PyQuil/Rigetti)
- Result post-processing and integration back into the knowledge graph
- Confidence scoring for quantum-derived inference
An important clarification on "quantum-inspired": some of what QIPC does involves classical simulation of quantum processes — using quantum algorithms on classical hardware as approximation methods — rather than requiring live quantum hardware. This is both a practical choice (quantum hardware access is expensive and limited) and a principled one: the goal is to leverage quantum algorithmic approaches where they provide genuine advantage, not to use quantum hardware as a marketing term.
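To make the "quantum-inspired" distinction concrete, here is a toy statevector simulation of a one-qubit circuit done entirely in classical Python. This is an illustrative sketch of the general technique, not QIPC's actual implementation; the function names are invented for this example.

```python
import math

# Minimal classical statevector simulation of a one-qubit circuit: H then
# RZ(theta). Illustrative only -- a toy example of simulating quantum
# processes on classical hardware, not the actual QIPC code.

def apply_h(state):
    """Hadamard gate on a single-qubit state [a, b] of complex amplitudes."""
    a, b = state
    s = 1 / math.sqrt(2)
    return [s * (a + b), s * (a - b)]

def apply_rz(state, theta):
    """RZ(theta): phase e^{-i theta/2} on |0>, e^{+i theta/2} on |1>."""
    a, b = state
    return [a * complex(math.cos(-theta / 2), math.sin(-theta / 2)),
            b * complex(math.cos(theta / 2), math.sin(theta / 2))]

def probabilities(state):
    """Born rule: measurement probabilities are squared amplitude magnitudes."""
    return [abs(amp) ** 2 for amp in state]

state = [1 + 0j, 0 + 0j]          # start in |0>
state = apply_h(state)            # equal superposition
state = apply_rz(state, math.pi)  # phase rotation
print(probabilities(state))       # ~[0.5, 0.5]: RZ shifts phase, not probabilities
```

The catch, and the reason live hardware eventually matters, is that the statevector doubles in size with every qubit, so this approach stops scaling somewhere in the tens of qubits.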
neda.py — Neuromorphic Evolutionary Dynamic Architecture
NEDA's quantum component handles variational circuits — parameterized quantum circuits that can be trained via gradient descent in a hybrid classical-quantum loop. The intended application is optimization problems where the search space is too large for classical methods to explore efficiently.
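The hybrid loop described above can be sketched in a few lines. The example below uses a one-parameter circuit RY(θ)|0⟩, whose ⟨Z⟩ expectation value is cos(θ), and trains θ with the parameter-shift rule, the standard technique for differentiating parameterized quantum circuits. This is a generic textbook illustration, not NEDA's code; on real hardware, `expectation_z` would be estimated from measurement shots.

```python
import math

# Toy hybrid classical-quantum optimization loop (illustrative sketch):
# the "quantum" half evaluates <Z> for RY(theta)|0>, which equals cos(theta);
# the classical half does gradient descent using the parameter-shift rule.

def expectation_z(theta):
    """<Z> for RY(theta)|0>; on hardware this would be a sampled estimate."""
    return math.cos(theta)

def parameter_shift_grad(theta, shift=math.pi / 2):
    """Exact gradient from two shifted circuit evaluations."""
    return (expectation_z(theta + shift) - expectation_z(theta - shift)) / 2

theta, lr = 0.4, 0.5
for _ in range(100):                     # classical optimizer loop
    theta -= lr * parameter_shift_grad(theta)

print(round(expectation_z(theta), 4))    # approaches -1.0 as theta -> pi
```

The key property is that the gradient comes from two extra circuit executions rather than from symbolic differentiation, which is what makes the loop feasible when the circuit runs on a quantum backend.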
NEDA is also the module flagged in the adversarial audit as containing "mathematics theater" — formal-looking mathematics that doesn't yet map to executable, verifiable computation. knowurknot is actively working on this. The honest assessment is that NEDA's quantum components represent the direction the architecture is moving more than a fully validated current capability.
This kind of honesty about what works and what doesn't is what separates a serious technical project from vaporware.
The .qasm Circuit Files
The originq_*.qasm files are OpenQASM circuit definitions — the actual quantum circuits written for specific CAPT modules:
| File | Associated Module | Purpose |
|---|---|---|
| 01_neda_q.qasm | NEDA | Variational optimization circuits |
| 02_qipc_q.qasm | QIPC | Hybrid inference circuits |
| 03_hmc_q.qasm | HMC | Quantum-enhanced hyperdimensional association |
OpenQASM is a real, standard quantum circuit description language. These files can be executed on any compatible quantum backend. They exist. They were written by one person.
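For readers who have never seen the format: OpenQASM 2.0 is plain text, one gate per line. Below is a textbook two-qubit Bell-state circuit, embedded in a Python string. This is a generic example of the format, emphatically NOT the contents of any originq_*.qasm file in the CAPT repository.

```python
# A generic OpenQASM 2.0 circuit (a two-qubit Bell state), shown purely to
# illustrate the format. This is a standard textbook example, NOT the
# contents of any originq_*.qasm file.
bell_qasm = """
OPENQASM 2.0;
include "qelib1.inc";
qreg q[2];
creg c[2];
h q[0];
cx q[0], q[1];
measure q -> c;
"""

# Because the format is plain text, simple structural checks are trivial:
n_gates = sum(line.split()[0] in ("h", "cx")
              for line in bell_qasm.split("\n") if line.strip())
print(n_gates)  # 2
```

Any OpenQASM-compatible backend or simulator can consume a file like this directly, which is what makes the circuit files portable across vendors.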
quantum_interface.py
The abstraction layer that sits between CAPT modules and backend-specific SDKs. Its purpose is to ensure that switching from IBM Quantum to Google Cirq to Amazon Braket doesn't require rewriting the modules that call quantum computation — only the interface adapter changes.
This is good software architecture. It is also the kind of design decision that reflects thinking about the long term, not just the immediate MVP.
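The adapter pattern this describes can be sketched briefly. The class and method names below are hypothetical illustrations of the design, not CAPT's actual API; a real adapter would wrap the Qiskit, Cirq, Braket, or PyQuil SDK behind the same interface.

```python
from abc import ABC, abstractmethod

# Sketch of a backend-agnostic adapter layer, the design quantum_interface.py
# is described as implementing. All names here are hypothetical.

class QuantumBackend(ABC):
    """Uniform interface every backend adapter must satisfy."""
    @abstractmethod
    def run(self, qasm: str, shots: int) -> dict:
        """Execute a QASM circuit and return measurement counts."""

class SimulatorBackend(QuantumBackend):
    """Stand-in local backend; a real adapter would wrap a vendor SDK."""
    def run(self, qasm: str, shots: int) -> dict:
        # Fake, deterministic Bell-state counts for illustration.
        return {"00": shots // 2, "11": shots - shots // 2}

def execute(circuit_qasm: str, backend: QuantumBackend, shots: int = 1024) -> dict:
    # Callers depend only on the QuantumBackend interface; switching from
    # IBM to Braket means swapping the adapter object, not rewriting this.
    return backend.run(circuit_qasm, shots)

counts = execute("OPENQASM 2.0; ...", SimulatorBackend())
print(counts)  # {'00': 512, '11': 512}
```

The payoff is exactly the one the article names: the modules that request quantum computation never import a vendor SDK, so vendor churn stays confined to the adapters.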
quantum_mission_simulator.py
A scenario simulator that uses quantum probabilistic modeling for planning under uncertainty. The intended use cases include situations where the space of possible outcomes is large enough that classical Monte Carlo methods become computationally prohibitive.
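As a baseline for comparison, here is what a minimal classical Monte Carlo scenario estimator looks like. The three-stage mission model is a hypothetical illustration invented for this sketch, not CAPT's simulator.

```python
import random

# Minimal classical Monte Carlo scenario estimator -- the baseline approach
# that quantum_mission_simulator.py is described as aiming to improve on.
# The mission model below is a hypothetical illustration.

def simulate_mission(rng):
    """One random rollout: three stages, each with an independent success chance."""
    stage_success = [0.95, 0.90, 0.85]
    return all(rng.random() < p for p in stage_success)

def estimate_success(n_trials=100_000, seed=42):
    rng = random.Random(seed)
    wins = sum(simulate_mission(rng) for _ in range(n_trials))
    return wins / n_trials

print(round(estimate_success(), 3))  # close to 0.95 * 0.90 * 0.85 = 0.727
```

The cost of this approach grows with the number of trials needed for a given precision, and the trials needed grow as outcome spaces branch; that scaling wall is the motivation the article gives for exploring quantum probabilistic modeling.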
What GitNexus Fabricated, and Why It Matters
Here is an excerpt from GitNexus's response when asked about quantum execution logs:
> "Raw Quantum Execution Logs:
> `holomem_01_quantum_results.json` — Timestamped JSON objects: `{ "backend": "...", "circuit_id": "...", "success": true, "fidelity": 0.987, "duration_ms": 312 }`"
>
> "IBM Quantum (Armonk) — Highest success rate (≈ 92%) and excellent fidelity (≈ 0.98). Used primarily for production-grade quantum-enhanced optimization tasks."
>
> "Amazon Braket — Good throughput on the simulated device; error rates low (≈ 0.03). Employed for rapid prototyping and benchmarking of new quantum algorithms."
None of these logs exist in the form described. The model saw file names in the holomem_* namespace and invented plausible contents. It saw quantum_interface.py and invented specific backend connection results. It saw backend names referenced in code and invented performance metrics.
The fidelity number — 0.987 — is specific enough to seem like it was read from a log. It was not. It was generated.
This is the canonical failure mode of LLMs applied to private codebases with incomplete access: the model fills in what it cannot see with what seems likely. And it does so without any flag, without any hedging, with the same tone it uses when it actually knows something.
The reason this matters for CAPT specifically: one of the core design goals of the architecture is to replace this kind of confident confabulation with verifiable, cryptographically signed, auditable claims. The Knowledge Bubble system exists precisely because "seems plausible" is not an acceptable epistemic standard for a system that is supposed to return knowledge to the people.
The Adversarial Audit Finding
A prior adversarial audit of CAPT flagged several modules — specifically QIPC, CIG, CONSC, and IMMU — as containing "mathematics theater": formal notation and algorithmic-looking code that doesn't yet correspond to executable, validated computation.
knowurknot's response to this finding is instructive. Rather than dismissing it or burying it, the working plan is to publish an honest capability review that clearly distinguishes:
- Genuinely working components: NEDA (core), HMC, HDR, PCFE, ALLO
- Modules requiring honest assessment: QIPC (partially), CIG, CONSC, IMMU quantum components
This is the right response. A system designed to return epistemic honesty to knowledge cannot be defended with epistemic dishonesty about itself.
The math theater modules are being addressed by integrating DeepSeek-Math V2 to replace metaphor-based proof logic, and by committing to clearly labeling the boundary between what works today and what is architectural intent for tomorrow.
The Bigger Picture
CAPT's quantum integration represents a genuine architectural commitment: to build a system that can eventually leverage quantum computational advantages for the kinds of problems — optimization, probabilistic inference, high-dimensional search — where quantum algorithms provide real benefit over classical methods.
It is not fully there yet. Neither is anyone else. The difference is that this was built by one person with three phones and a borrowed laptop, without a quantum computing research team, without IBM's resources, and without pretending otherwise.
GitNexus — an IBM model — invented IBM teams and IBM quantum logs to explain work that one independent researcher produced.
That is the summary of this article. Sit with it.
Series continues in: Article 05 — The Inversion Has Begun: In knowurknot's Own Words