Six production-grade engines form the execution layer of the Agent OS. Each engine is configurable at runtime through the Dashboard — no code changes required. Together, they ensure every agent action is verified, compliant, executed, audited, stateful, and grounded in structured knowledge.
The six engines
WHY SIX ENGINES
LLMs generate text. Engines make it trustworthy. Every agent action passes through the Six Engines before, during, and after execution — ensuring that outputs are factually accurate, regulation-compliant, reliably executed, cryptographically audited, business-process-aware, and grounded in organizational knowledge.
Fact verification and hallucination detection. Every claim an agent makes is checked against authoritative sources before it reaches your users or downstream systems.
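The shape of such a check can be sketched in a few lines. This is illustrative only — the store name, fields, and `verify_claim` helper are hypothetical, not the product's API:

```python
# Hypothetical sketch: each factual claim an agent emits is looked up
# against a source of record before release. A real deployment would query
# live systems; a dict stands in here.
AUTHORITATIVE_SOURCES = {
    "invoice_total": "1,240.00",
    "contract_start": "2024-01-15",
}

def verify_claim(field: str, claimed_value: str) -> bool:
    """Return True only if the claim matches the source of record."""
    source_value = AUTHORITATIVE_SOURCES.get(field)
    return source_value is not None and source_value == claimed_value

assert verify_claim("invoice_total", "1,240.00")      # matches the source
assert not verify_claim("invoice_total", "1,250.00")  # hallucinated figure
```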
Real-time compliance enforcement. Regulatory rules are applied deterministically — not by model judgment — to every agent action, every output, every decision.
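"Deterministic" here means each rule is a plain predicate over the action, with no model judgment in the loop. A minimal sketch, assuming hypothetical rule names and action fields:

```python
# Illustrative only: compliance rules as deterministic predicates.
# Rule ids and action fields are invented for this sketch.
from dataclasses import dataclass
from typing import Callable

@dataclass
class Rule:
    rule_id: str
    check: Callable[[dict], bool]  # True = action allowed

RULES = [
    Rule("no-pii-export",
         lambda a: not (a["type"] == "export" and a.get("contains_pii"))),
    Rule("amount-cap", lambda a: a.get("amount", 0) <= 10_000),
]

def enforce(action: dict) -> list[str]:
    """Return the ids of every rule the action violates (empty = compliant)."""
    return [r.rule_id for r in RULES if not r.check(action)]

enforce({"type": "export", "contains_pii": True, "amount": 50_000})
# → ["no-pii-export", "amount-cap"]
```

Because the rules are code, not prompts, the same action always yields the same verdict — which is what makes the result auditable.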
Task execution and tool-call management. The core runtime that coordinates agent actions, manages tool invocations, and ensures reliable completion of every mission step.
Tamper-proof execution records. Every agent action generates a cryptographic audit entry via AuditChain — a SHA-256 hash-chained log that cannot be altered after the fact.
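The hash-chaining idea is standard: each entry's hash covers its payload plus the previous entry's hash, so altering any past record invalidates every hash after it. A minimal sketch of that mechanism (not AuditChain's actual format):

```python
# Sketch of a SHA-256 hash chain: entry N's hash commits to entry N-1's
# hash, so tampering anywhere breaks verification from that point on.
import hashlib
import json

def append_entry(chain: list[dict], payload: dict) -> None:
    prev_hash = chain[-1]["hash"] if chain else "0" * 64
    body = json.dumps({"payload": payload, "prev": prev_hash}, sort_keys=True)
    chain.append({"payload": payload, "prev": prev_hash,
                  "hash": hashlib.sha256(body.encode()).hexdigest()})

def verify(chain: list[dict]) -> bool:
    prev_hash = "0" * 64
    for entry in chain:
        body = json.dumps({"payload": entry["payload"], "prev": prev_hash},
                          sort_keys=True)
        if entry["prev"] != prev_hash or \
           entry["hash"] != hashlib.sha256(body.encode()).hexdigest():
            return False
        prev_hash = entry["hash"]
    return True

chain: list[dict] = []
append_entry(chain, {"action": "send_email", "agent": "a1"})
append_entry(chain, {"action": "update_crm", "agent": "a1"})
assert verify(chain)
chain[0]["payload"]["action"] = "delete_db"  # tamper with history
assert not verify(chain)                     # the chain detects it
```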
Business process state management. Maps AI agent workflows to formal business process states — ensuring agents follow your organization's actual operational logic, not just LLM guesswork.
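Formal process states can be pictured as an explicit state machine, where an agent may only take transitions the process defines. A sketch with invented states and actions:

```python
# Illustrative state machine: (current_state, action) -> next_state.
# States and actions are hypothetical examples, not a real schema.
TRANSITIONS = {
    ("draft", "submit"): "in_review",
    ("in_review", "approve"): "approved",
    ("in_review", "reject"): "draft",
    ("approved", "fulfill"): "fulfilled",
}

def advance(state: str, action: str) -> str:
    """Apply an action; raise if the process does not allow it here."""
    nxt = TRANSITIONS.get((state, action))
    if nxt is None:
        raise ValueError(f"action {action!r} not allowed in state {state!r}")
    return nxt

state = advance("draft", "submit")   # → "in_review"
state = advance(state, "approve")    # → "approved"
```

An agent that tries to "fulfill" a draft simply cannot: the transition does not exist, so the engine rejects it rather than trusting the model to know the process.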
Structured knowledge storage and retrieval. Gives agents access to your organization's structured knowledge — entities, relationships, hierarchies, and domain ontologies — not just unstructured text embeddings.
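One common way to hold entities and relationships is as typed triples, queryable by relation rather than by embedding similarity. A toy sketch with an invented schema:

```python
# Hypothetical triple store: (subject, relation, object). The entities and
# relation names below are illustrative, not a real ontology.
TRIPLES = [
    ("Acme Corp", "has_subsidiary", "Acme EU"),
    ("Acme EU", "has_department", "Finance"),
    ("Finance", "owned_by_role", "CFO"),
]

def related(entity: str, relation: str) -> list[str]:
    """All objects linked to `entity` by `relation`."""
    return [o for s, r, o in TRIPLES if s == entity and r == relation]

related("Acme Corp", "has_subsidiary")  # → ["Acme EU"]
```

The point of the structure: "which department belongs to Acme EU?" has one exact answer here, where a text-embedding lookup would only return approximately similar passages.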
ENGINE PIPELINE
Every agent action flows through the engines in a coordinated pipeline. The sequence ensures that by the time an action reaches your systems, it has been verified, authorized, and recorded — with full rollback capability if anything fails.
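The ordering-plus-rollback behavior can be sketched as stages run in sequence, with completed stages unwound in reverse when a later one fails. Stage names and the `trace` list are illustrative, not the product's internals:

```python
# Hypothetical pipeline sketch: run stages in order; on failure, roll back
# every completed stage in reverse order.
class StageError(Exception):
    pass

trace: list[str] = []  # records what ran, for illustration

class Stage:
    def __init__(self, name: str, fail: bool = False):
        self.name, self.fail = name, fail
    def run(self, action: dict) -> None:
        if self.fail:
            raise StageError(self.name)
        trace.append(f"run:{self.name}")
    def rollback(self, action: dict) -> None:
        trace.append(f"rollback:{self.name}")

def run_pipeline(action: dict, stages: list[Stage]) -> bool:
    done: list[Stage] = []
    for stage in stages:
        try:
            stage.run(action)
            done.append(stage)
        except StageError:
            for s in reversed(done):  # unwind in reverse order
                s.rollback(action)
            return False
    return True

ok = run_pipeline({}, [Stage("verify"), Stage("comply"),
                       Stage("execute", fail=True), Stage("audit")])
# ok is False; "verify" and "comply" ran, then rolled back in reverse
```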