Semantically Speaking: Context Without Ontology Breaks
Why Decision-Aware AI Requires Managed Meaning, Not Just Memory
Over the last two years, a quiet realization has been spreading through teams building production AI systems. The problem is not the models. It is not the data volume. It is not orchestration.
The problem is that our systems do not remember how decisions were made.
They remember outcomes. They remember final states. They remember what happened.
They do not remember why.
This realization sits at the heart of recent work by Jaya Gupta and Ashu Garg: their essay AI's Trillion Dollar Opportunity: Context Graphs and Gupta's follow-on essay Where Context Graphs Materialize. Together, these pieces give language to something many builders have felt but struggled to articulate. Modern systems are excellent at recording state, but remarkably poor at preserving the reasoning, authority, and judgment that make actions legitimate at the moment they occur.
As AI agents increasingly act across systems, make recommendations, and execute actions on behalf of organizations, this absence becomes dangerous. Without decision memory, agents cannot be trusted. Without precedent, they cannot generalize responsibly. Without context, they cannot explain themselves.
The context graph thesis identifies the missing layer clearly. Once decisions themselves become durable, inspectable artifacts, however, a deeper requirement becomes unavoidable. Structure alone cannot stabilize meaning. Memory alone cannot enforce legitimacy. The moment decisions are treated as first-class entities, systems are forced to confront questions that context alone cannot answer.
That confrontation is ontological.
The Enterprise Bottleneck Is Not Intelligence
Enterprise software created enormous value by becoming systems of record. Salesforce became the system of record for customers. Workday for employees. SAP for operations. Ownership of canonical data meant ownership of workflows and economic leverage.
AI does not eliminate the need for systems of record. It raises the bar. As Jamin Ball has argued in Long Live Systems of Record, agents do not replace systems of record. They demand better ones.
But better does not mean more predictive.
Most enterprise systems record end states. A discount was approved. A ticket was closed. An incident was resolved. These outcomes are treated as sufficient. The reasoning that produced them is informal, scattered, and external to the system.
This worked when software was passive and humans were responsible for judgment. It fails when software becomes active.
An AI agent that approves discounts, routes incidents, or grants access is not merely executing workflows. It is exercising authority. Authority without justification is unacceptable in any serious institution.
Context graphs surface this gap correctly.
What Context Graphs Get Right
A context graph, as Gupta describes it, records decision traces rather than just outcomes.
Not only that a discount was approved, but how rules were interpreted, where exceptions were invoked, who authorized the action, and why it was allowed to proceed.
This is a crucial move. It sharply distinguishes the context graph thesis from superficial calls for better logging or model introspection. The argument is not about exposing chains of thought. It is about preserving institutional memory.
In that sense, context graphs are closer to precedent tracking than audit logs. They are meant to support questions like:
What precedent was followed here?
Which rules were applicable at the time?
Who had authority to act?
Under what conditions did this exception hold?
These are governance questions, not modeling questions.
They are the right questions.
Ontology Is Not Missing. It Is Unmanaged.
Ontology is often spoken of as if it were a single, uniform layer that can be added beneath context graphs to stabilize meaning. This framing obscures the real issue.
Ontology is not one thing.
In practice, systems already rely on many ontological commitments at once. They make assumptions about rules, roles, permissions, authorities, interpretations, explanations, events, and time. These commitments are usually implicit, fragmented, and inconsistent, but they are present.
The problem is not that ontology is absent.
The problem is that ontological commitments are unmanaged.
When decisions are recorded without enforcing distinctions between kinds, meaning becomes unstable. Policy collapses into precedent. Interpretation collapses into rule. Permission collapses into action. Queries appear to work until governance depends on them. At that point, ambiguity becomes failure.
Ontology begins not by labeling nodes, but by separating kinds and enforcing their relationships.
Context graphs make this necessity visible. They do not resolve it on their own.
A Decision Is Not an Object Without Conditions
One of the strongest intuitions in the context graph thesis is that decisions must become first-class entities. That insight carries a subtle risk.
A decision is not an object in the same way a record or transaction is an object.
A decision exists only relative to conditions:
a set of applicable rules or norms
an authority structure
a context of interpretation
a temporal scope
a justification frame
Detached from these conditions, what remains is merely an outcome.
This distinction matters. Treating decisions as free-standing objects risks flattening them into static artifacts and erasing the very thing decision memory is meant to preserve. A stored decision without its enabling conditions is not precedent. It is anecdote.
For AI systems to generalize responsibly, they must not merely recall prior decisions. They must recognize when the conditions that made those decisions valid no longer hold.
That requires representing decisions as condition-bound acts, not static artifacts.
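One way to make "condition-bound" concrete: store the enabling conditions as named predicates over the current institutional state, and re-check them before reusing a decision as precedent. A hedged sketch, with all names hypothetical:

```python
from dataclasses import dataclass
from typing import Callable

# A condition is a named predicate over the current institutional state.
Condition = Callable[[dict], bool]

@dataclass(frozen=True)
class ConditionBoundDecision:
    outcome: str
    conditions: tuple[tuple[str, Condition], ...]  # (label, predicate) pairs

    def lapsed_in(self, state: dict) -> list[str]:
        """Return labels of enabling conditions that no longer hold in `state`."""
        return [label for label, pred in self.conditions if not pred(state)]

decision = ConditionBoundDecision(
    outcome="access granted to billing data",
    conditions=(
        ("requester is on-call", lambda s: s.get("on_call", False)),
        ("incident is open",     lambda s: s.get("incident_open", False)),
    ),
)

# Same decision, later state: an enabling condition has lapsed, so the
# stored decision is anecdote here, not precedent.
lapsed = decision.lapsed_in({"on_call": True, "incident_open": False})
```

The design choice that matters is that the check runs against the state at reuse time, not the state at decision time.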
Category Errors Become System Failures
Richer structure does not eliminate ambiguity by itself. Structure always presupposes metaphysical commitments, whether acknowledged or not.
Several category confusions recur silently in decision systems:
rules versus interpretations of rules
exceptions versus errors
authority versus execution
explanation versus justification
similarity versus comparability
When these distinctions are not enforced, systems compensate with inference, probability, or similarity. That can work for retrieval. It fails for governance.
Many of these distinctions are ontic rather than epistemic. They concern what is binding, permitted, or effective, not merely what is believed or inferred. Once agents act, these differences become operational constraints.
Context graphs surface these problems. Ontological commitments determine whether systems can resolve them.
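These distinctions can be enforced at the representation layer rather than left to inference. A minimal illustration using distinct Python types, so that an interpretation cannot be stored where a rule is required; all class names here are hypothetical:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Rule:
    """A binding norm: what is permitted or required."""
    text: str

@dataclass(frozen=True)
class Interpretation:
    """A reading of a rule in a specific case; it is not itself binding."""
    of_rule: Rule
    reading: str

@dataclass(frozen=True)
class RuleException:
    """A warranted departure from a rule, tied to the rule it departs from."""
    from_rule: Rule
    warrant: str

def is_binding(x: object) -> bool:
    # Only rules bind; interpretations and exceptions do not, by construction.
    return isinstance(x, Rule)

rule = Rule("Discounts above 10% require director approval.")
reading = Interpretation(rule, "Bundled services count toward the 10% cap.")
```

Because an Interpretation must point at the Rule it reads, the category error of treating a reading as a norm fails at construction time instead of surfacing later as a governance failure.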
Where Context Graphs Materialize, Governance Becomes Harder
Gupta’s second essay makes an additional and crucial clarification. Context graphs do not appear fully formed through schema design. They materialize gradually through the capture of real decisions as they are committed, often imperfectly and under pressure, shaped by human judgment that resists formalization.
This bottom-up emergence is a strength. It allows systems to learn how organizations actually operate, not just how they claim to.
It also sharpens the problem.
When context graphs emerge from practice, they inherit not only institutional knowledge, but institutional ambiguity. Repeated exceptions begin to look like policy. Similar decisions begin to look like precedent. Heuristics quietly harden into authority.
Once context graphs materialize, systems must be able to distinguish between what was done, what was permitted, and what should generalize. Without that distinction, decision memory compounds while governance degrades.
Context Graphs as Proof-Carrying Structures
If context graphs are to support trustworthy autonomy, they must do more than store traces. They must justify actions.
Not by exposing model internals or replaying reasoning, but by answering a stronger institutional question: by what right did this occur?
A mature context graph should be understood as proof-carrying rather than merely descriptive. Each decision node implicitly asserts that certain conditions were satisfied. A rule applied. Authority was valid. Scope was respected. An exception was warranted.
When these justifications cannot be represented, they are silently assumed. When assumptions fail, trust collapses.
This does not require formal proofs or symbolic logic. It requires the ability to distinguish what counts as justification from what does not.
That is how real institutions operate. Decisions stand not because they happened, but because they were permitted under the rules in force at the time.
From Decision Memory to Governed Reasoning
Context graphs correctly identify the missing memory layer in AI systems. But memory without kind distinctions, conditions, and justification is not governance. It is accumulation.
Ontology, properly understood, is not an academic indulgence. It is how institutions remember why they were allowed to act.
As AI systems inherit institutional authority, that distinction becomes decisive.
Context reveals the problem.
Ontology determines whether it can be solved.
That is where trustworthy AI will ultimately be built.
About Interstellar Semantics
Interstellar Semantics helps organizations design and operationalize formal ontologies and enterprise meaning structures grounded in open standards such as RDF, OWL, and SPARQL. As platforms consolidate and infrastructure layers grow more opaque, we help clients retain control over meaning, ensure interoperability across systems, and build semantic foundations that remain stable through re-platforming and AI adoption.
We are currently taking on a select group of clients.
If your organization is preparing for a future where semantic clarity, formal modeling, and ontological rigor are first-order concerns, we're here to help.