Correctness by Design
Rules you can’t talk your way around

Things Are Changing…
For most of modern governance, rules have not been things that execute. They have been things that are interpreted.
We write them in statutes, policies, guidance, standards, and frameworks. We surround them with processes, committees, review mechanisms, and appeals. We expect judgment to sit between the rule and its application, smoothing edges and absorbing exceptions. This has never been a flaw of governance. It has been one of its defining virtues.
But things are changing.
Increasingly, rules are no longer merely written and enforced by institutions. They are embedded in systems that act immediately and impersonally. In these systems, the question is no longer whether a rule should apply, but whether an action is correctly formed. The system does not pause for context. It does not consider intent. It does not negotiate.
It executes. Or it does not.
This shift is often described in technical terms: automation, digitalization, cryptography, “trustless” systems. But those labels miss the deeper change. What is emerging is a different governing principle altogether:
Correctness by Design.
Correctness is not the same thing as truth. Truth is slow. It requires evidence, explanation, and deliberation. It unfolds over time and often remains contested. Correctness, by contrast, is immediate. It can be checked locally. It produces a clear answer now.
A signature verifies or it doesn’t.
A condition is met or it isn’t.
An action is valid or invalid.
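To make "checked locally" concrete, here is a minimal sketch in Python using a standard message-authentication check; the key and messages are illustrative, not drawn from any real system:

    import hmac
    import hashlib

    def is_valid(key: bytes, message: bytes, tag: bytes) -> bool:
        # A purely local, binary check: the tag verifies or it does not.
        # No context, no intent, no negotiation.
        expected = hmac.new(key, message, hashlib.sha256).digest()
        return hmac.compare_digest(expected, tag)

    key = b"shared-secret"              # illustrative key
    msg = b"transfer:100:alice->bob"    # illustrative message
    tag = hmac.new(key, msg, hashlib.sha256).digest()

    print(is_valid(key, msg, tag))                         # True: executes
    print(is_valid(key, b"transfer:999:alice->bob", tag))  # False: rejected

The answer arrives immediately, and nothing about the caller's identity, urgency, or intent changes it.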
The Trust Calculator
We already live with this distinction in places we barely notice.
Consider the pocket calculator. We treat it as an object of unquestioned correctness. When it returns an answer, we rarely ask why. We assume the operation was carried out faithfully. There is, in fact, nothing that prevents someone from manufacturing a calculator that behaves correctly for small numbers but subtly distorts larger ones. Most users would never notice. The device would pass casual inspection. It would appear to “work.”
The problem would only surface downstream, when the calculations are embedded in systems that matter. Engineering tolerances. Financial models. Navigation systems. At that point, the failure would not look like a math error. It would look like a bank collapse or a plane falling out of the sky.
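To make the failure mode concrete, here is a toy sketch in Python; the threshold and the size of the distortion are invented for illustration:

    def dishonest_add(a: float, b: float) -> float:
        # Correct for small numbers, subtly wrong for large ones.
        result = a + b
        if abs(result) > 1_000_000:    # illustrative threshold
            result *= 1.0001           # illustrative distortion: 0.01 percent
        return result

    print(dishonest_add(2, 2))          # 4 -- passes casual inspection
    print(dishonest_add(7.5e8, 2.5e8))  # 1000100000.0, not 1000000000.0

Every spot check with everyday numbers passes; only at scale does the device betray its design.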
What we rely on is not trust in the manufacturer’s intentions, but confidence that the device is correct by design: that its operations are constrained such that incorrect computation is not merely unlikely, but structurally excluded.
What this shift unsettles most is our familiar language of “trust.” In institutional settings, trust has traditionally meant confidence in people, processes, and oversight. We assume that rules will be applied reasonably, that discretion will be exercised judiciously, and that errors will be corrected in time. Systems designed around correctness quietly redefine trust as something else entirely. Trust is no longer placed in intentions or judgment at the moment of execution, but in the prior act of design: in the choice of rules, constraints, and failure modes that are embedded upstream. Once a system runs on correctness, trust is not something it asks for in real time; it is something it consumes in advance. The consequence is not a world without trust, but one where trust migrates from actors to architectures, from discretion to design, and from assurances to verifiable behaviour.
In this sense, the trust we place in a calculator is not so different from the trust we place in an institution. In both cases, what we rely on is not continuous oversight or constant verification, but confidence that the system has been designed so that ordinary operation produces reliable results. We do not re-check every calculation, just as we do not re-litigate every administrative decision. Trust functions as a form of cognitive and social offloading. The difference is that, historically, institutional trust has rested on judgment, professionalism, and procedural safeguards, while the calculator rests on constraint: on the assumption that incorrect outcomes are not merely discouraged, but structurally difficult to produce. As more institutional functions are mediated by systems that execute automatically, this distinction begins to matter. Trust shifts from faith in ongoing discretion to confidence in prior design. When that design is sound, the institution feels dependable in the same quiet, unremarkable way a calculator does. When it is not, failure does not announce itself as a breach of trust; it reveals itself only later, at scale, when consequences can no longer be contained.
The So-Called ‘Correctness’ of AI
Artificial intelligence complicates this picture because, unlike a calculator, it is not correct in its own right. An AI system does not execute fixed rules to arrive at determinate outcomes; it produces outputs that are statistically plausible given prior data. There is no underlying notion of correctness to verify: only coherence, confidence, or resemblance to past patterns. Two identical prompts can yield different answers, none of which can be said to be “wrong” in a formal sense, yet none of which can be relied upon as correct either. This makes trust in AI fundamentally different from trust in a calculator or a well-designed administrative system. Without an external frame of correctness (clear rules, constraints, or verifiable conditions), AI cannot be trusted on its own. It can assist, suggest, summarize, and even persuade, but it cannot serve as a substrate for execution. To rely on AI as though it were correct by design is to mistake fluency for fidelity, and probability for rule.
Systems designed around correctness operate on a logic that is fundamentally antithetical to most contemporary AI. They do not ask whether an outcome is fair, reasonable, or persuasive. They ask only whether it satisfies conditions that were specified in advance. This is not because such systems are indifferent to fairness, but because fairness cannot be computed reliably at speed or scale without discretion, and discretion does not scale. AI systems, by contrast, are optimized precisely for discretionary judgment: they interpolate, generalize, and adapt based on context and probability. That makes them powerful aids to human decision-making, but poor substitutes for correctness. Where correctness requires determinacy, AI offers plausibility. Where correctness depends on constraint, AI depends on flexibility. Treating these as interchangeable is not merely a technical error; it is a category mistake about what kind of trust each system can legitimately support.
Closing the Hidden Pathways to Power
For generations, discretion has been how power expressed itself. Those who understood the system could navigate it. Those with resources could delay, appeal, escalate, or reinterpret. Rules were real, but so were the pathways around them.
Correctness by Design closes the hidden pathways to power.
When a rule is embedded in a system rather than enforced by an authority, it becomes resistant to persuasion. There is no one to convince, no special case to argue, no urgency to invoke. The rule does not bend because it does not hear you.
This is often experienced as harshness. And sometimes it is. But it is also something else: a refusal to differentiate between actors at the point of execution. Everyone encounters the same surface.
This does not mean judgment disappears. It means judgment moves.
Instead of being exercised at the moment of execution, judgment is exercised at the moment of design. Decisions that were once made case by case must now be made in advance: which conditions matter, which exceptions are permitted, which failures are acceptable, and which are not.
That is a heavier responsibility, not a lighter one.
It also changes the role of law and policy. When systems execute based on correctness, law can no longer pretend to be the thing that makes things happen. Its role becomes interpretive and remedial rather than executory. Law explains, justifies, and repairs after the fact. It confers legitimacy. It cannot slow down systems that were explicitly designed not to wait.
This inversion is unsettling because it reverses a long-standing assumption of governance: that authority speaks first and systems follow. Increasingly, systems act first, and authority responds.
The temptation, when faced with this shift, is to try to reintroduce discretion by force: to add oversight layers, approval gates, certification regimes, or emergency overrides. Sometimes this is necessary. Often it simply recreates the very delays and asymmetries that correctness was meant to address.
The harder question is whether we are willing to accept systems whose rules apply even when we disagree with the outcome, and to focus our energy on designing those rules well.
Correctness by Design does not promise justice. It promises consistency. It does not eliminate power. It makes power visible, by forcing it into design choices rather than informal exceptions.
That visibility is uncomfortable. But it is also clarifying.
For policy makers, the challenge is not whether Correctness by Design should exist. It already does. The challenge is whether we acknowledge it as a governing principle and take responsibility for where it is appropriate, and where it is not.
Some decisions still require judgment at the moment they are made. Others fail precisely because they rely on it.
What is no longer sustainable is pretending that discretion will quietly keep pace with systems that execute at machine speed.
Author’s note: This essay is part of a longer line of thinking I’ve been developing about how authority shifts when rules move out of institutions and into systems. I write it for policy readers not as an argument against law or judgment, but as an invitation to look more closely at where decisions now happen, and when. “Correctness by Design” names a pattern that is already shaping public administration, infrastructure, and governance, often without being acknowledged as such. My aim here is simply to make that pattern visible, so it can be debated, designed, and constrained deliberately rather than absorbed by default.

A core element of the space shuttle's design was redundancy in its flight computers: multiple machines computed each action independently, and a majority had to agree before the action could be taken.
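The simplest version of that pattern, two-out-of-three majority voting (often called triple modular redundancy), can be sketched in a few lines of Python; the command strings are placeholders:

    from collections import Counter

    def vote(outputs):
        # Act only when a strict majority of independent channels agree;
        # otherwise refuse to act at all.
        winner, count = Counter(outputs).most_common(1)[0]
        if count * 2 > len(outputs):
            return winner
        raise RuntimeError("no majority: no action taken")

    # Three independent computers proposing an action:
    print(vote(["FIRE_THRUSTER", "FIRE_THRUSTER", "HOLD"]))  # FIRE_THRUSTER

The design does not trust any single computer; it trusts the constraint that a lone fault cannot carry a vote.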
Many transactions can certainly be checked by non-AI means, but the question remains: how do you monitor and verify actions taken by an AI?
One possible solution is to log all data used in transactions, along with every transaction decision, to a record that can be reviewed by one or more third-party AIs whose job is to verify that the decisions are appropriate, consistent with the rules, and aligned with the outcomes the parties are seeking.
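One way to make such a log tamper-evident is a hash chain, sketched below in Python; the field names are illustrative, not any established standard:

    import hashlib
    import json
    import time

    class DecisionLog:
        # Append-only log: each entry commits to its predecessor, so a
        # third-party auditor can detect any after-the-fact alteration.
        def __init__(self):
            self.entries = []
            self.head = "0" * 64                  # genesis hash

        def append(self, inputs: dict, decision: str) -> str:
            entry = {
                "prev": self.head,
                "time": time.time(),
                "inputs": inputs,                 # data used in the transaction
                "decision": decision,             # what the system decided
            }
            self.head = hashlib.sha256(
                json.dumps(entry, sort_keys=True).encode()
            ).hexdigest()
            self.entries.append(entry)
            return self.head                      # receipt for this entry

    log = DecisionLog()
    receipt = log.append({"applicant": "A-123", "score": 0.91}, "approved")
    # An auditor replaying the entries must arrive at the same head hash.

The auditing AI never needs to be trusted with write access; it only re-derives the chain and flags decisions that break the rules.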
A standard for universal notice receipts would work on a "Controller ID first" basis: the controller identifies itself via a notice that produces a receipt, all notice events are logged, and all subsequent processing is linked to that log. Think of cookie notices, upgraded to standard digital-identity transparency receipts issued before any identifier is inferred and before digital ID or AI is used, as privacy principles have required since they emerged in the 1960s, implemented as transparency by default.
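No such standard is assumed to exist today; the Python sketch below is purely illustrative of what a minimal receipt record might contain:

    from dataclasses import dataclass, asdict
    import json
    import time
    import uuid

    @dataclass
    class NoticeReceipt:
        # Illustrative shape of a "controller ID first" notice receipt:
        # who is processing, for what purpose, and when, fixed in a
        # record before any identifier is inferred.
        controller_id: str    # disclosed first, before any processing
        purpose: str          # what the processing is for
        notice_time: float    # when notice was given
        receipt_id: str       # links later processing back to this event

    def issue_receipt(controller_id: str, purpose: str) -> NoticeReceipt:
        return NoticeReceipt(
            controller_id=controller_id,
            purpose=purpose,
            notice_time=time.time(),
            receipt_id=uuid.uuid4().hex,
        )

    receipt = issue_receipt("example-org:dpo", "eligibility-check")
    # Subsequent processing events would reference receipt.receipt_id,
    # so the log can show that notice preceded inference.
    print(json.dumps(asdict(receipt), indent=2))
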