Guardian agents are specialist supervisors that run continuously across every system you've shipped. They enforce policy, score outputs, and catch violations in real time. One coordinated defense. No review queue. No gaps.
One orchestrator coordinates all agents and delivers unified decisions.
The central nervous system of the KoraSafe Agent ecosystem. It receives governance events, determines which agents to engage, sequences multi-step analysis workflows, and returns a unified response. Every agent interaction flows through the Orchestrator.
Real-time PII detection and redaction across all LLM input and output streams. Catches names, SSNs, credit cards, health IDs, and more before data leaves the system boundary.
Discrimination and fairness monitoring for screening, ranking, and decision-making agents. Applies the four-fifths (4/5) rule and demographic parity analysis to flag disparate impact.
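The four-fifths rule itself is simple arithmetic: a group is flagged when its selection rate falls below 80% of the highest group's rate. A minimal sketch, assuming an illustrative `disparateImpact` helper (not the Bias Guardian's actual implementation):

```typescript
type GroupOutcome = { selected: number; total: number };

// Return the names of groups whose selection rate is below 80%
// of the best-performing group's rate (the four-fifths rule).
function disparateImpact(groups: Record<string, GroupOutcome>): string[] {
  const rates = Object.entries(groups).map(
    ([name, g]) => [name, g.selected / g.total] as const,
  );
  const best = Math.max(...rates.map(([, r]) => r));
  return rates.filter(([, r]) => r / best < 0.8).map(([name]) => name);
}

// Group B is selected at 30/100 = 0.3 versus A's 0.5;
// 0.3 / 0.5 = 0.6 < 0.8, so B is flagged.
const flagged = disparateImpact({
  A: { selected: 50, total: 100 },
  B: { selected: 30, total: 100 },
});
// flagged === ["B"]
```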
Source grounding validation for conversational and advisory agents. Catches fabricated citations, ungrounded claims, and outputs with no knowledge-base support using LLM-as-Judge verification.
Budget thresholds and automatic circuit breakers across all API-consuming agents. Detects spend anomalies, enforces rate limits, and triggers automated throttling at budget caps.
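The circuit-breaker idea reduces to tracking spend against a cap and refusing calls once the cap would be exceeded. A hypothetical sketch (the `BudgetBreaker` class and its API are ours, not KoraSafe's):

```typescript
// Illustrative budget circuit breaker: tracks cumulative spend and
// blocks any charge that would push spend past the configured cap.
class BudgetBreaker {
  private spent = 0;
  constructor(private capUsd: number) {}

  // Record a charge; returns false (call throttled) once the cap is hit.
  charge(usd: number): boolean {
    if (this.spent + usd > this.capUsd) return false;
    this.spent += usd;
    return true;
  }
}

const breaker = new BudgetBreaker(100);
const ok1 = breaker.charge(60); // true  — within budget
const ok2 = breaker.charge(30); // true  — 90 spent so far
const ok3 = breaker.charge(20); // false — would exceed the 100 cap
```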
Autonomy level enforcement for supervised and fully autonomous agents. Blocks out-of-scope actions, detects unauthorized escalation attempts, and forces approval workflows when boundaries are crossed.
Multi-framework compliance verification with RAG-powered regulatory context. Monitors all registered agents for compliance drift, policy adherence gaps, and documentation completeness.
Deep analysis, classification, and regulatory expertise. Intelligence agents turn raw governance data into structured, actionable insight.
Multi-step risk classification with structured output. Evaluates AI systems across impact dimensions, assigns risk tiers, and generates standardized risk profiles in under sixty seconds.
RAG-powered regulatory Q&A with cited sources. Answers governance questions against the EU AI Act, NIST AI RMF, ISO 42001, and other frameworks with full source attribution.
Continuously scans for regulatory changes across jurisdictions. Assesses the impact of new regulations on your AI fleet and alerts stakeholders before deadlines hit.
Policy evaluation, violation management, and remediation orchestration. Matches detected issues to the right enforcement action and tracks resolution through to closure.
Board-level analysis, maturity scoring, and long-range planning. Strategic agents turn governance operations into executive advantage.
Board-level strategic analysis and scenario planning. Synthesizes governance posture, risk trends, and regulatory trajectory into executive-ready recommendations and what-if analyses.
Maturity radar scoring across governance dimensions. Benchmarks your organization against industry standards and highlights the highest-impact areas for improvement.
Automated evidence generation and audit package assembly. Produces timestamped, regulator-ready documentation bundles from your governance activity with zero manual effort.
Phased remediation planning based on gap analysis. Generates prioritized, time-bound action plans that map directly to regulatory requirements and maturity targets.
Governance that connects to where your teams already work.
Connects the KoraSafe Agent ecosystem to your existing tools. Pushes alerts and summaries to Slack, creates tickets in Jira and Linear, gates CI/CD pipelines on governance status, and fires webhooks for custom automation.
Every KoraSafe Agent operates at an autonomy level you control. Dial agents up or down based on your risk appetite and trust maturity.
Agent monitors your AI fleet and reports findings. No actions taken. You review every alert and decide what happens next.
Agent proposes specific actions with supporting evidence. A human reviews and approves before anything executes.
Agent executes within pre-approved boundaries you define. Actions are logged and auditable. Override is always one click away.
From request to response, here is what happens inside the KoraSafe Orchestrator.
When a request arrives, the KoraSafe Orchestrator sends it for intent classification. The classifier analyzes the message, determines what the user needs, and selects which agents should handle the request. This routing step ensures every query reaches the right specialist agents without manual intervention.
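In production this classification step is an LLM call; as a toy stand-in, the routing decision can be pictured as a function from message to agent list. The intent labels and agent names below are assumptions for illustration only:

```typescript
// Hypothetical routing table; the real classifier is model-driven,
// not keyword-driven, and these agent names are illustrative.
const ROUTES: Record<string, string[]> = {
  compliance: ["complianceGuardian", "regulatoryQA"],
  cost: ["costGuardian"],
  default: ["regulatoryQA"],
};

function route(message: string): string[] {
  const text = message.toLowerCase();
  if (/eu ai act|nist|iso 42001|audit/.test(text)) return ROUTES.compliance;
  if (/budget|spend|cost/.test(text)) return ROUTES.cost;
  return ROUTES.default;
}

route("Are we over budget this month?"); // routes to the cost specialist
```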
Multiple Guardian agents can scan the same input simultaneously. The Orchestrator dispatches agents in parallel using Promise.allSettled, so a PII check, bias scan, and hallucination detection all run at the same time. If one agent fails, the others still complete and return their results.
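The failure-isolation property comes directly from `Promise.allSettled`: a rejected agent promise surfaces as a "rejected" entry rather than sinking the whole batch. A minimal sketch, with illustrative agent functions and a simplified `Finding` shape:

```typescript
type Finding = { agent: string; ok: boolean; detail: string };

// Dispatch all agents in parallel; keep only fulfilled results,
// so one failing agent never discards the others' findings.
async function dispatch(
  agents: Array<(input: string) => Promise<Finding>>,
  input: string,
): Promise<Finding[]> {
  const settled = await Promise.allSettled(agents.map((a) => a(input)));
  return settled.flatMap((s) => (s.status === "fulfilled" ? [s.value] : []));
}

const piiCheck = async (t: string): Promise<Finding> => ({
  agent: "pii",
  ok: !/\d{3}-\d{2}-\d{4}/.test(t), // flag an SSN-shaped pattern
  detail: "SSN pattern scan",
});
const failingScan = async (_: string): Promise<Finding> => {
  throw new Error("bias scan timed out"); // isolated, does not sink the batch
};

dispatch([piiCheck, failingScan], "ticket 123-45-6789").then((findings) => {
  // Only the PII finding survives; the failed scan is dropped.
});
```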
When multiple agents are invoked, their individual responses are synthesized into a single coherent answer. The user sees one unified response, not a disjointed list of agent outputs. Sources and suggested actions from each agent are preserved in the merged result.
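The merge step can be pictured as folding per-agent responses into one object while de-duplicating sources and actions. Field names here (`answer`, `sources`, `actions`) are assumptions, not KoraSafe's published schema, and a production synthesizer would compose the answer text with an LLM rather than a join:

```typescript
type MergedResponse = { answer: string; sources: string[]; actions: string[] };

// Combine per-agent responses into a single reply, preserving every
// source citation and suggested action exactly once.
function synthesize(responses: MergedResponse[]): MergedResponse {
  return {
    answer: responses.map((r) => r.answer).join(" "),
    sources: [...new Set(responses.flatMap((r) => r.sources))],
    actions: [...new Set(responses.flatMap((r) => r.actions))],
  };
}

const merged = synthesize([
  { answer: "No PII found.", sources: ["pii-scan"], actions: [] },
  { answer: "One ungrounded claim.", sources: ["kb-check", "pii-scan"], actions: ["review"] },
]);
// merged.sources === ["pii-scan", "kb-check"]; merged.actions === ["review"]
```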
Every agent follows the same BaseAgent run() contract: it receives the user message plus organization context, and returns a structured response containing the answer text, source citations, and recommended actions. This consistent interface makes it straightforward to add new agents to the ecosystem.
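A sketch of what that contract could look like in TypeScript. The exact KoraSafe types are not shown in this document, so the interfaces below are assumptions that mirror the described shape (message plus org context in, answer text, citations, and recommended actions out):

```typescript
interface OrgContext { orgId: string; tier: "observe" | "propose" | "execute"; }
interface AgentResult { answer: string; sources: string[]; actions: string[]; }

// Every agent implements the same run() signature.
abstract class BaseAgent {
  abstract run(message: string, ctx: OrgContext): Promise<AgentResult>;
}

// Adding a new agent to the ecosystem means implementing run() and
// nothing else — the orchestrator already knows how to call it.
class EchoAgent extends BaseAgent {
  async run(message: string, ctx: OrgContext): Promise<AgentResult> {
    return { answer: `[${ctx.orgId}] ${message}`, sources: [], actions: [] };
  }
}
```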
Every KoraSafe agent operates at one of four autonomy tiers. You control which tier each agent runs at, and you can change it at any time.
Planner classifies the request, dispatches the right specialists in parallel, collects typed findings, and hands a signed pack to audit.
Classify the request, load policy, route by capability.
Incoming MCP or REST call lands on the orchestrator. Policy Router reads tenant, tier claim, and target system from the registry, looks up the Rego policy pack, and resolves the DAG. Nothing dispatches until the plan is emitted. A human-readable plan is available at plan.get() before any agent runs.
Launch the specialists named in the plan. Each agent gets its scope, budget, and deadline up front.
Gather typed findings from every agent in parallel. Merge, de-duplicate, attach knowledge-graph citations.
Hand a signed evidence pack to the append-only audit log. Stable request id, Merkle hash, seven-year retention.
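The Merkle hash is what makes the pack tamper-evident: each finding is hashed, and the hashes are folded pairwise into a single root stored in the audit log. A minimal sketch of the idea (not KoraSafe's actual pack format):

```typescript
import { createHash } from "crypto";

const sha256 = (data: string): string =>
  createHash("sha256").update(data).digest("hex");

// Fold leaf hashes pairwise into a single Merkle root; any change
// to any finding changes the root, so the sealed pack is tamper-evident.
function merkleRoot(leaves: string[]): string {
  let level = leaves.map(sha256);
  while (level.length > 1) {
    const next: string[] = [];
    for (let i = 0; i < level.length; i += 2) {
      // Duplicate the last hash when the level has an odd count.
      next.push(sha256(level[i] + (level[i + 1] ?? level[i])));
    }
    level = next;
  }
  return level[0];
}

const root = merkleRoot([
  '{"agent":"pii","ok":true}',
  '{"agent":"bias","ok":true}',
]);
// root is a stable 64-char hex digest for the whole evidence pack
```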
Every agent operates at a tier you set, per tenant, per capability. The first three tiers keep a human in the loop by design. Automation runs only where you have signed off that it can.
read-only, p99 300ms
Read-only analysis. No direct actions, no side effects, no tool calls that mutate state. The agent becomes a very fast researcher, citing knowledge-graph nodes (node_id:eu-ai-act:annex-iii:1b, nist-ai-rmf:govern-1.5) in every finding. Findings render in the UI with a click-through citation and a confidence score. Default tier for sensitive contexts.
Use it for EU AI Act Annex III novel use cases and any context where a human reviewer is legally required.
dual-control, SR 11-7 ready
Rego scope, five percent review
break-glass audited