Verifiable reasoning for AI decisions.
Ontologic is a proof-of-reasoning protocol that enables AI agents and distributed systems to cryptographically prove the justification behind their actions, not just the actions themselves. A four-hash morpheme proof — rule, inputs, outputs, meaning — anchored on Hedera Consensus Service. Agents prove not just what they did, but why.
Ascension Hackathon
1st Place ($10K)
Hedera's official hackathon — external validation from the ecosystem
Apex Hackathon
Building Now
Feb 17 – Mar 16, 2026 — $250K prize pool
Hedera-Native
HCS Anchored
Verifiable timestamps and total ordering via Hedera Consensus Service
Why This Matters
Networks verify behavior, not justification. There is a gap between action and meaning. As autonomous agents start making consequential choices — moving money, granting access, triggering incidents — we need legible reasoning checkpoints, not just logs. Ontologic makes reasoning provenance a first-class primitive: bind a rule + inputs + outputs + meaning into a cryptographically verifiable proof that any verifier can recompute, without trusting the agent that generated it.
The Thesis
The Problem
LLMs, autonomous agents, and symbolic systems produce outputs through processes that are neither reproducible nor independently verifiable. Blockchains secure state transitions, but not the logical structures that generated them. There is a gap between action and meaning.
Ontologic's Response
Ontologic proposes a deterministic, minimal protocol to close this gap. The protocol formalizes reasoning into three sequential layers — the Triune Proof Model:
Rule Layer (Definition)
A rule is any declarative expectation in canonical form — a logical axiom, a domain constraint, a transformation. The rule is hashed to produce a ruleHash.
Inference Layer (Execution)
Given a rule and inputs, a deterministic procedure evaluates whether the rule holds. Inputs and outputs are serialized and hashed. The inference must be deterministic and externally reproducible.
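The determinism requirement can be sketched as follows. This is an illustrative assumption about how serialization might work, not Ontologic's actual API: if inputs and outputs are serialized canonically (sorted keys, fixed separators) before hashing, any verifier reproduces the same digest regardless of how the data was originally ordered.

```python
import hashlib
import json

def canonical_hash(obj) -> str:
    """Serialize deterministically (sorted keys, no extra whitespace), then hash."""
    canonical = json.dumps(obj, sort_keys=True, separators=(",", ":")).encode("utf-8")
    return hashlib.sha256(canonical).hexdigest()

# Key order and formatting must not change the digest,
# or independent verifiers could never recompute the proof.
a = canonical_hash({"amount": 100, "currency": "USD"})
b = canonical_hash({"currency": "USD", "amount": 100})
assert a == b
```

The design choice matters: without a canonical form, two honest parties hashing semantically identical data would derive different digests and verification would fail.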
The Inference Layer corresponds to the Tarski truth-conditional layer, evaluating whether the rule's conditions hold against the given inputs. In Ontologic's CMY color system, this layer is represented by Cyan.
Meaning Layer (Attestation)
The results are published as a canonical meaning record via consensus-backed attestation. Hedera Hashgraph satisfies the requirements — total ordering, verifiable timestamps, immutability, low-latency publication — through the Hedera Consensus Service.
The Meaning Layer corresponds to the Floridi information layer, establishing semantic significance and public attestation. In Ontologic's CMY color system, this layer is represented by Yellow.
The final proof — the morpheme — binds all three layers into a single verifiable hash:
proofHash = H(ruleHash || inputsHash || outputsHash || meaningHash)
The fourth hash — meaningHash — binds the proof to a consensus-anchored semantic attestation, making the morpheme a complete reasoning record rather than just a computation log.
Two independent parties computing the same reasoning derive the same morpheme. Verification does not require trust in the agent that produced the reasoning.
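A minimal sketch of the morpheme construction, assuming SHA-256 as the hash function and hex-digest concatenation for the binding (both are illustrative assumptions, not primitives the protocol specifies):

```python
import hashlib

def sha256_hex(data: bytes) -> str:
    return hashlib.sha256(data).hexdigest()

def morpheme(rule: bytes, inputs: bytes, outputs: bytes, meaning: bytes) -> str:
    """proofHash = H(ruleHash || inputsHash || outputsHash || meaningHash)."""
    rule_hash = sha256_hex(rule)
    inputs_hash = sha256_hex(inputs)
    outputs_hash = sha256_hex(outputs)
    meaning_hash = sha256_hex(meaning)
    # Bind all four layer hashes into a single verifiable proof hash.
    combined = (rule_hash + inputs_hash + outputs_hash + meaning_hash).encode("utf-8")
    return sha256_hex(combined)

# Two independent parties computing the same reasoning
# derive the same morpheme; no trust in the producing agent is needed.
party_a = morpheme(b"rule", b"inputs", b"outputs", b"meaning")
party_b = morpheme(b"rule", b"inputs", b"outputs", b"meaning")
assert party_a == party_b
```

Because the morpheme is a pure function of its four components, a verifier recomputes it from the published record and compares digests; any tampering with the rule, inputs, outputs, or attestation changes the proof hash.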
What's Ahead
Building toward dynamic rule registries and the OTS/OCS framework. Instead of encoding rules in smart contracts, agents will resolve human-readable rule URIs via registries built on existing Hedera standards (HCS-1 for rule storage, HCS-2 for registries, HCS-13 for schema validation). The design anticipates community governance of rule taxonomies. This is the direction — not the current implementation.
Hologlass is an AI agent attestation harness that intercepts tool calls, gates execution through human-in-the-loop authorization, and witnesses every decision on Hedera.
Ontologic = consensus about why it was concluded
Hologlass = consensus about who permitted it
Request Early Access
Ontologic is in active pre-Alpha development. Request early access to be notified when developer tooling ships.