Always-on governance
across your AI fleet

Guardian agents are specialist supervisors that run continuously across every system you've shipped. They enforce policy, score outputs, and catch violations in real time. One coordinated defense. No review queue. No gaps.

Agent ecosystem

One orchestrator coordinates all agents and delivers unified decisions.

KoraSafe Orchestrator

The central nervous system of the KoraSafe Agent ecosystem. It receives governance events, determines which agents to engage, sequences multi-step analysis workflows, and returns a unified response. Every agent interaction flows through the Orchestrator.

Task routing · Agent coordination · Result synthesis · Workflow sequencing · Priority management

Guardian agents

KoraSafe PII Sentinel

Real-time PII detection and redaction across all LLM input and output streams. Catches names, SSNs, credit cards, health IDs, and more before data leaves the system boundary.

KoraSafe Bias Watchdog

Discrimination and fairness monitoring for screening, ranking, and decision-making agents. Applies the four-fifths rule and demographic parity analysis to flag disparate impact.

KoraSafe Hallucination Detector

Source grounding validation for conversational and advisory agents. Catches fabricated citations, ungrounded claims, and outputs with no knowledge-base support using LLM-as-Judge verification.

KoraSafe Cost Controller

System

Budget thresholds and automatic circuit breakers across all API-consuming agents. Detects spend anomalies, enforces rate limits, and triggers automated throttling at budget caps.

KoraSafe Autonomy Guard

System

Autonomy level enforcement for supervised and fully autonomous agents. Blocks out-of-scope actions, detects unauthorized escalation attempts, and forces approval workflows when boundaries are crossed.

KoraSafe Compliance Auditor

Multi-framework compliance verification with RAG-powered regulatory context. Monitors all registered agents for compliance drift, policy adherence gaps, and documentation completeness.

Intelligence agents

Deep analysis, classification, and regulatory expertise. Intelligence agents turn raw governance data into structured, actionable insight.

KoraSafe Risk Assessment

Multi-step risk classification with structured output. Evaluates AI systems across impact dimensions, assigns risk tiers, and generates standardized risk profiles in under sixty seconds.

KoraSafe Knowledge Base

RAG-powered regulatory Q&A with cited sources. Answers governance questions against the EU AI Act, NIST AI RMF, ISO 42001, and other frameworks with full source attribution.

KoraSafe Regulatory Monitor

Continuously scans for regulatory changes across jurisdictions. Assesses the impact of new regulations on your AI fleet and alerts stakeholders before deadlines hit.

KoraSafe Enforcement

Policy evaluation, violation management, and remediation orchestration. Matches detected issues to the right enforcement action and tracks resolution through to closure.

Strategic agents

Board-level analysis, maturity scoring, and long-range planning. Strategic agents turn governance operations into executive advantage.

KoraSafe Advisory

Board-level strategic analysis and scenario planning. Synthesizes governance posture, risk trends, and regulatory trajectory into executive-ready recommendations and what-if analyses.

KoraSafe Governance Maturity

Maturity radar scoring across governance dimensions. Benchmarks your organization against industry standards and highlights the highest-impact areas for improvement.

KoraSafe Audit

Automated evidence generation and audit package assembly. Produces timestamped, regulator-ready documentation bundles from your governance activity with zero manual effort.

KoraSafe Compliance Roadmap

Phased remediation planning based on gap analysis. Generates prioritized, time-bound action plans that map directly to regulatory requirements and maturity targets.

Integration agent

Governance that connects to where your teams already work.

KoraSafe Integration

System

Connects the KoraSafe Agent ecosystem to your existing tools. Pushes alerts and summaries to Slack, creates tickets in Jira and Linear, gates CI/CD pipelines on governance status, and fires webhooks for custom automation.

Slack · Jira · Linear · CI/CD gates · Webhooks

Autonomy tiers

Every KoraSafe Agent operates at an autonomy level you control. Dial agents up or down based on your risk appetite and trust maturity.

Default

Observe

Agent monitors your AI fleet and reports findings. No actions taken. You review every alert and decide what happens next.

Guided

Recommend

Agent proposes specific actions with supporting evidence. A human reviews and approves before anything executes.

Trusted

Act

Agent executes within pre-approved boundaries you define. Actions are logged and auditable. Override is always one click away.

How agent routing works

From request to response, here is what happens inside the KoraSafe Orchestrator.

Intent classification and routing

When a request arrives, the KoraSafe Orchestrator sends it for intent classification. The classifier analyzes the message, determines what the user needs, and selects which agents should handle the request. This routing step ensures every query reaches the right specialist agents without manual intervention.
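A minimal sketch of how intent-based routing like this could look. Everything here is illustrative: the intent names, the keyword rules (a production classifier would be model-driven), and the agent identifiers are all assumptions, not KoraSafe's actual API.

```typescript
// Hypothetical sketch: keyword-based intent routing.
// Intent names, patterns, and agent ids are illustrative only.
type Intent = "pii" | "bias" | "hallucination" | "compliance";

const AGENTS_BY_INTENT: Record<Intent, string[]> = {
  pii: ["pii-sentinel"],
  bias: ["bias-watchdog"],
  hallucination: ["hallucination-detector"],
  compliance: ["compliance-auditor", "regulatory-monitor"],
};

function classifyIntent(message: string): Intent[] {
  const intents: Intent[] = [];
  if (/ssn|credit card|email|phone/i.test(message)) intents.push("pii");
  if (/fair|bias|disparate/i.test(message)) intents.push("bias");
  if (/citation|source|grounded/i.test(message)) intents.push("hallucination");
  if (/eu ai act|nist|iso 42001/i.test(message)) intents.push("compliance");
  return intents;
}

function route(message: string): string[] {
  // Map each detected intent to its specialist agents, de-duplicated.
  return [...new Set(classifyIntent(message).flatMap((i) => AGENTS_BY_INTENT[i]))];
}
```

The useful property is the indirection: the classifier decides *what* the request is about, and a lookup table decides *who* handles it, so new specialists can be added without touching the classifier.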

Parallel execution via Promise.allSettled

Multiple Guardian agents can scan the same input simultaneously. The Orchestrator dispatches agents in parallel using Promise.allSettled, so a PII check, bias scan, and hallucination detection all run at the same time. If one agent fails, the others still complete and return their results.
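The fan-out described above can be sketched in a few lines. The `Finding` shape and agent signatures are assumptions for illustration; the key mechanic, `Promise.allSettled`, is exactly what the text names: a rejected agent promise does not discard the fulfilled ones.

```typescript
// Illustrative parallel Guardian dispatch. Shapes are assumed, not KoraSafe's API.
interface Finding {
  agent: string;
  violations: string[];
}

type Guardian = (input: string) => Promise<Finding>;

async function dispatchGuardians(
  input: string,
  guardians: Guardian[],
): Promise<Finding[]> {
  // allSettled: one failing agent never blocks the others' results.
  const results = await Promise.allSettled(guardians.map((g) => g(input)));
  return results
    .filter((r): r is PromiseFulfilledResult<Finding> => r.status === "fulfilled")
    .map((r) => r.value);
}
```

With `Promise.all` a single rejection would reject the whole batch; `allSettled` is what lets a PII check succeed even while the bias scanner is down.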

Response synthesis

When multiple agents are invoked, their individual responses are synthesized into a single coherent answer. The user sees one unified response, not a disjointed list of agent outputs. Sources and suggested actions from each agent are preserved in the merged result.
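A toy version of that merge step, under assumed field names. A real synthesizer would likely run an LLM pass over the individual answers rather than concatenating them; the point here is that sources and actions from every agent survive the merge.

```typescript
// Illustrative response synthesis. Field names are assumptions.
interface AgentResponse {
  agent: string;
  answer: string;
  sources: string[];
  actions: string[];
}

function synthesize(responses: AgentResponse[]): AgentResponse {
  return {
    agent: "orchestrator",
    // Join individual answers into one narrative; a production synthesizer
    // would rewrite rather than concatenate.
    answer: responses.map((r) => r.answer).join(" "),
    // Preserve sources and suggested actions from every agent, de-duplicated.
    sources: [...new Set(responses.flatMap((r) => r.sources))],
    actions: [...new Set(responses.flatMap((r) => r.actions))],
  };
}
```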

BaseAgent run() pattern

Every agent follows the same BaseAgent run() contract: it receives the user message plus organization context, and returns a structured response containing the answer text, source citations, and recommended actions. This consistent interface makes it straightforward to add new agents to the ecosystem.
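One plausible encoding of that contract, with the context and result shapes assumed for illustration. The value of the pattern is that the orchestrator can treat every agent identically: message plus context in, structured result out.

```typescript
// Sketch of a BaseAgent run() contract. Field names are assumptions.
interface OrgContext {
  orgId: string;
  policies: string[];
}

interface AgentResult {
  answer: string;
  sources: string[];
  actions: string[];
}

abstract class BaseAgent {
  constructor(readonly name: string) {}
  // Every agent implements the same signature, so the orchestrator
  // never special-cases any agent.
  abstract run(message: string, ctx: OrgContext): Promise<AgentResult>;
}

// A trivial agent showing how little is needed to join the ecosystem.
class EchoAgent extends BaseAgent {
  async run(message: string, ctx: OrgContext): Promise<AgentResult> {
    return {
      answer: `[${this.name}@${ctx.orgId}] ${message}`,
      sources: [],
      actions: [],
    };
  }
}
```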

Four autonomy tiers

Every KoraSafe agent operates at one of four autonomy tiers. You control which tier each agent runs at, and you can change it at any time.

Tier 1, Observe: monitors and reports only. No actions taken.
Tier 2, Advise: proposes actions with evidence. Human decides.
Tier 3, Supervised action: executes with human approval gates.
Tier 4, Full autonomy: independent within defined boundaries.
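The four tiers above could be encoded as a simple per-agent setting; the enum names and helper below are illustrative, not KoraSafe's configuration schema.

```typescript
// Hypothetical encoding of the four autonomy tiers.
enum AutonomyTier {
  Observe = 1,    // monitor and report only
  Advise = 2,     // propose actions, human decides
  Supervised = 3, // execute behind human approval gates
  Autonomous = 4, // independent within defined boundaries
}

function requiresHumanApproval(tier: AutonomyTier): boolean {
  // Tiers 1-3 keep a human in the loop; only tier 4 acts independently.
  return tier < AutonomyTier.Autonomous;
}
```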

From intake to signed evidence

The planner classifies the request, dispatches the right specialists in parallel, collects typed findings, and hands a signed evidence pack to audit.

Step 01

Intake, classify and route

Classify the request, load policy, route by capability.

An incoming MCP or REST call lands on the orchestrator. The Policy Router reads the tenant, tier claim, and target system from the registry, looks up the Rego policy pack, and resolves the DAG. Nothing dispatches until the plan is emitted; a human-readable plan is available via plan.get() before any agent runs.

Latency: p50 26ms, p99 48ms
Policy pack: Rego, hot-reload under two seconds
Plan: JSON DAG, signed
Audit: req_id issued at this step
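A possible shape for the signed JSON plan emitted at this step, with a sanity check over the DAG edges. All field names here are assumptions for illustration; the source specifies only that the plan is a signed JSON DAG carrying a stable request id.

```typescript
// Hypothetical shape of the signed plan. Field names are assumptions.
interface PlanNode {
  agent: string;
  dependsOn: string[]; // DAG edges: run only after these agents complete
}

interface Plan {
  reqId: string;      // stable request id issued at intake
  tenant: string;
  policyPack: string; // Rego pack resolved from the registry
  nodes: PlanNode[];
  signature: string;  // detached signature over the serialized plan
}

// Basic well-formedness check: every dependency names an agent in the plan.
function validatePlan(plan: Plan): boolean {
  const names = new Set(plan.nodes.map((n) => n.agent));
  return plan.nodes.every((n) => n.dependsOn.every((d) => names.has(d)));
}
```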
Step 02

Dispatch

Launch the specialists named in the plan. Each agent gets its scope, budget, and deadline up front.

Step 03

Collect

Gather typed findings from every agent in parallel. Merge, de-duplicate, attach knowledge-graph citations.

Step 04

Audit

Hand a signed evidence pack to the append-only audit log. Stable request id, Merkle hash, seven-year retention.
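To make the tamper-evidence idea concrete, here is a minimal sketch of chaining findings into one digest. The source describes a Merkle hash, which hashes pairwise up a tree; this linear chain is a simplification that shows the same append-only property, and the function name is invented.

```typescript
import { createHash } from "node:crypto";

// Illustrative evidence-pack digest: chain each finding into a single
// hash so any edit to the log is detectable. A real Merkle tree hashes
// pairwise; this linear chain just demonstrates the idea.
function chainDigest(reqId: string, findings: string[]): string {
  return findings.reduce(
    (acc, f) => createHash("sha256").update(acc).update(f).digest("hex"),
    createHash("sha256").update(reqId).digest("hex"),
  );
}
```

The same request id and findings always produce the same digest, and changing any finding anywhere in the sequence changes it, which is what lets an auditor verify a seven-year-old pack.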

Authority your team assigns

Every agent operates at a tier you set, per tenant, per capability. The first three tiers keep a human in the loop by design. Automation runs only where you have signed off that it can.

Tier 01, advise

Agent suggests, human executes

read-only, p99 300ms

Read-only analysis: no direct actions, no side effects, no tool calls that mutate state. The agent becomes a very fast researcher, citing knowledge-graph nodes (node_id:eu-ai-act:annex-iii:1b, nist-ai-rmf:govern-1.5) in every finding. Findings render in the UI with click-through citations and a confidence score. This is the default tier for sensitive contexts.

Action class: read-only
Latency SLA: p99 300ms
Citation: node_id required
Audit: query logged

Use it for novel EU AI Act Annex III use cases and any context where a human reviewer is legally required.

Tier 02, assist

Agent drafts, human approves each step

dual-control, SR 11-7 ready

Tier 03, delegate

Agent acts within guardrails

Rego scope, five percent review

Tier 04, automate

Agent runs end to end

break-glass audited