AI Governance Intelligence
for the Agentic Era
This guide covers every module, workflow, and concept you need to govern your AI fleet with confidence.
KoraSafe is an AI governance intelligence platform that helps enterprise teams classify risk, register AI systems, enforce policies, and maintain continuous compliance across multi-framework regulatory environments. Six specialized Guardian Agents monitor your AI fleet around the clock. KoraSafe, the AI assistant, answers governance questions with cited regulatory sources.
Who This Guide Is For
Governance Leads
Set up your AI registry, run assessments, and generate audit-ready documentation for legal and regulators.
AI / ML Engineers
Register systems, configure enforcement policies, and connect KoraSafe to CI/CD pipelines via MCP API.
Risk and Compliance
Track maturity scoring, manage RACI matrices, export checklists, and monitor violation dashboards.
No prior compliance expertise required
KoraSafe's assessment engine and AI assistant guide you through regulatory requirements step by step. You describe your AI system; the platform handles the classification and gap analysis.
Quick Start
Get from zero to a full governance posture in four steps. Most teams complete the initial setup in under an hour.
Create Your Organization and Invite Your Team
Navigate to Admin > Organization. Set your organization name, industry, and preferred jurisdictions. Then go to Admin > Users and send token-based invitations to colleagues. Assign roles based on their responsibilities: Owners control org settings, Admins manage systems, Analysts run assessments, and Viewers have read-only access.
Register Your AI Systems
Go to AI Registry and click Add System. Fill in the name, system type, deployment domain, underlying model, and assigned owner. Set the autonomy level: Observe, Advise, Supervised Action, or Full Autonomy. Each registered system becomes a governed asset tracked across all modules.
Run Your First Risk Assessment
Navigate to Risk Assessment and select a registered system. Work through the five-step guided workflow: agent type, industry, data categories, affected populations, and jurisdictions. The assessment engine classifies the system under EU AI Act, GDPR, and relevant US state laws within about a minute. Download the resulting Assessment Report, Technical Documentation, or Governance Roadmap.
Activate Guardian Agents
Go to Enforcement > Guardian Controls. Enable the guardians relevant to your systems. PII Sentinel and Hallucination Detector are recommended for all deployments. Set each guardian's autonomy level: Observe (report only), Propose (request human approval), or Act (execute within boundaries). Guardian activity streams into the Enforcement violation log automatically.
No-code quick assessment available at korasafe.ai
The homepage offers an instant risk assessment without signing in. Describe your AI system, answer the guided questions, and get a risk classification with regulatory citations in under 60 seconds. Use this for a fast first look before setting up your full organization account.
Core Concepts
Key ideas that underpin how KoraSafe works. Understanding these makes every module easier to use.
Risk Tiers
Every AI system in KoraSafe is classified into one of the four risk tiers defined by the EU AI Act: Unacceptable (prohibited), High, Limited, or Minimal risk. The tier determines the controls, documentation, and monitoring required.
Autonomy Levels
Every registered AI system is assigned an autonomy level that defines the governance controls it requires and how Guardian Agents enforce boundaries.
Level 0: Read, log, and report
The system monitors and produces output but takes no external actions. Recommended for analytics and reporting tools. Lowest enforcement overhead.
Level 1: Recommend and draft
The system generates recommendations or drafts content for human review. A human approves before anything is executed. Standard for decision-support tools.
Level 2: Act within pre-approved boundaries
The system executes within a defined scope you control. All actions are logged and auditable. Override is available at any time. Appropriate for workflow automation.
Level 3: Full execution authority
The system acts independently within its domain. Requires the strictest governance controls: continuous guardian monitoring, complete audit trail, and Human-in-the-Loop approval gates for out-of-scope actions.
The Six Pillars of AI Governance
KoraSafe measures governance maturity across six pillars. Each pillar has four to five controls that map to regulatory requirements and industry standards.
Regulatory Frameworks
| Framework | Jurisdiction | Scope | Coverage in KoraSafe |
|---|---|---|---|
| EU AI Act | European Union | AI systems placed on EU market | Full: risk tiers, Annex III, prohibited practices |
| GDPR | EU / EEA | Processing of personal data | Full: Art. 22, Art. 35 DPIA, lawful basis, DPAs |
| Colorado SB 205 | Colorado, US | High-risk algorithmic systems | Impact assessment, bias auditing requirements |
| NYC Local Law 144 | New York City | Automated employment decisions | Bias audit requirements for hiring tools |
| CCPA / CPRA | California, US | Consumer data rights | Data rights workflows, opt-out mechanisms |
| NIST AI RMF | US (voluntary) | AI risk management | Maturity framework alignment |
| ISO 42001 | International | AI management systems | Governance maturity mapping |
Risk Assessment
Multi-framework risk classification. Evaluate AI systems across EU AI Act, GDPR, and US state laws with cited regulatory sources. Results in under 60 seconds.
Running an Assessment
Navigate to Risk Assessment and select a registered AI system (or start a new one from the landing page). The assessment works through five sections:
Step 1: Agent type
Select all functions that apply to your AI system. Most agents serve multiple functions, and each one can trigger different regulatory requirements. For example, a hiring-screening agent that also uses biometric recognition triggers both the Employment (Annex III, point 4(a)) and Biometric Categorization (Annex III, point 1(b)) high-risk categories under the EU AI Act simultaneously.
Step 2: Industry
Select the industry your agent serves, not the industry your company operates in. A fraud-detection agent for a bank operates in Financial Services even if the vendor builds fintech software.
Step 3: Data categories
Select every data type the system accesses, processes, or generates. Health data triggers GDPR special categories (Art. 9). Biometric data can trigger EU AI Act prohibitions. Select all that apply; the engine resolves overlapping requirements automatically.
Be comprehensive on data categories
Underreporting data types is the most common cause of under-classification. If your system even incidentally processes a data type, select it. The engine will note it in context rather than misclassifying the system as lower risk.
Step 4: Affected Populations
Include everyone your system makes decisions about, interacts with, or communicates to. Vulnerable groups (minors, patients, people with disabilities) trigger stricter requirements under both EU AI Act and GDPR. Select all groups that may be reached, not just the primary intended users.
Step 5: Jurisdictions and Existing Controls
Select where your system operates or serves users. Then work through the governance posture section: for each practice (e.g., "Human review before consequential decisions," "Bias testing in production"), select your current status. Honest answers produce better recommendations.
Assessment Output
Risk Classification
EU AI Act tier, applicable Annex III category, GDPR applicability, US state law findings, each with specific article citations.
Compliance Score
A 0 to 100 readiness score derived from your governance posture answers, risk tier, and agent autonomy level.
Governance Gaps
Prioritized list of missing controls: Critical, High, Medium, and Low, with the specific regulation requiring each one.
Exportable Reports
Assessment Report, Technical Documentation, and Governance Roadmap, all ready for legal teams, auditors, or investors.
Follow-Up Q&A
After the assessment completes, a chat interface appears at the bottom of the results page. Ask KoraSafe follow-up questions grounded in your specific assessment context, for example, "What does Art. 9(2)(g) mean for my health-data system?" or "Which controls address the DPIA requirement?" Answers are cited against the regulatory knowledge base.
| Report type | Contents | Typical use |
|---|---|---|
| Assessment Report | Risk classification, findings, regulatory citations, gap summary | Legal team, DPO, board |
| Technical Documentation | System description, data flows, control inventory, risk register | Conformity assessment, regulatory filing |
| Governance Roadmap | Prioritized remediation steps, timelines, responsible owners | Engineering and compliance sprint planning |
AI Registry
The single source of truth for every AI system in your organization. Catalog, classify, and track lifecycle status across your entire AI fleet.
Adding a System
Click Add System in the AI Registry. The minimum required fields are:
- Name: a clear internal identifier (e.g., "Customer Churn Predictor v2")
- System type: classification, prediction, generative, agentic, or hybrid
- Domain: the business area or industry the system serves
- Underlying model: the model family or vendor (e.g., GPT-4o, Llama 3, internal model)
- Owner: the team member accountable for this system's governance
- Autonomy level: Observe, Advise, Supervised Action, or Full Autonomy
- Lifecycle status: Development, Staging, Production, or Deprecated
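Registry entries can also be created programmatically via the `/api/ai-systems` endpoint listed in the MCP API section. The sketch below shows one way the required fields might be assembled into a request payload; the camelCase field names (`systemType`, `autonomyLevel`, etc.) are illustrative assumptions, not the confirmed API schema.

```python
# Sketch: building a POST /api/ai-systems payload from the required fields.
# Field names are assumptions; check the actual API schema before use.
import json

AUTONOMY_LEVELS = {"Observe", "Advise", "Supervised Action", "Full Autonomy"}
LIFECYCLE_STATUSES = {"Development", "Staging", "Production", "Deprecated"}

def build_system_payload(name, system_type, domain, model, owner,
                         autonomy_level, lifecycle_status):
    """Validate the minimum required fields and return a JSON body."""
    if autonomy_level not in AUTONOMY_LEVELS:
        raise ValueError(f"unknown autonomy level: {autonomy_level}")
    if lifecycle_status not in LIFECYCLE_STATUSES:
        raise ValueError(f"unknown lifecycle status: {lifecycle_status}")
    return json.dumps({
        "name": name,
        "systemType": system_type,
        "domain": domain,
        "underlyingModel": model,
        "owner": owner,
        "autonomyLevel": autonomy_level,
        "lifecycleStatus": lifecycle_status,
    })

payload = build_system_payload(
    "Customer Churn Predictor v2", "prediction", "Customer Success",
    "GPT-4o", "jane@example.com", "Advise", "Staging")
```

Validating the enumerated fields client-side keeps bad entries out of the registry before the request is ever sent.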
Fleet View
The fleet view shows all registered systems in a sortable, filterable table. Filter by risk classification, autonomy level, lifecycle status, or owner. Click any system to open its detail view.
System Detail Tabs
| Tab | Contents |
|---|---|
| Overview | Risk score, autonomy classification, compliance readiness percentage, last assessment date |
| Governance | Six-pillar status heatmap, maturity scores, dimension-by-dimension breakdown |
| Enforcement | Active guardrails, recent violations, Guardian Agent trigger counts |
| History | Timestamped changelog: assessments run, policy changes, ownership transfers, status changes |
Deprecate, don't delete
When an AI system is retired, set its lifecycle status to Deprecated rather than deleting it. This preserves the audit trail and governance history for regulatory purposes. Regulators may request historical records for systems no longer in use.
Governance
Track governance maturity across Six Pillars, score your program against industry benchmarks, and manage the Eval-Driven Development pipeline.
Governance Heatmap
The heatmap displays a matrix of all registered AI systems (rows) against the governance dimensions (columns). Color coding shows status at a glance:
Not Implemented
No control exists for this dimension. Indicates a gap that needs remediation.
In Progress
Control is partially implemented. Review what remains to reach compliance.
Implemented
Control is in place and documented. Eligible to be cited in audit evidence.
Maturity Radar
The maturity radar scores your organization across seven governance dimensions against a five-level scale:
| Level | Name | Description |
|---|---|---|
| 1 | Initial | Ad-hoc processes, no formal governance documentation |
| 2 | Developing | Some documented processes, inconsistent application |
| 3 | Defined | Standardized processes documented and followed consistently |
| 4 | Managed | Processes measured, monitored, and continuously improved |
| 5 | Optimized | Proactive governance, benchmarked against industry standards |
Agent Evals (EDD Pipeline)
The Eval-Driven Development (EDD) pipeline applies a quality gate to AI systems before they reach production. The pipeline runs in four stages, culminating in the Gate stage that controls promotion.
Evals are scored across six weighted dimensions: Accuracy, Bias, Hallucination Rate, Safety, Compliance, and Latency. A composite score must exceed the configured threshold before the Gate stage passes.
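The weighted-composite logic can be sketched as follows. The weights and the 70-point threshold are illustrative assumptions, and each dimension is assumed to be pre-normalized to a 0 to 100 scale where higher is better (so a low hallucination rate maps to a high score); KoraSafe's actual weighting may differ.

```python
# Sketch of the EDD composite score: six weighted dimensions, and the Gate
# stage passes only if the weighted sum clears the configured threshold.
# Weights and threshold here are illustrative, not KoraSafe's actual values.

WEIGHTS = {
    "accuracy": 0.25,
    "bias": 0.15,
    "hallucination_rate": 0.20,
    "safety": 0.20,
    "compliance": 0.15,
    "latency": 0.05,
}

def composite_score(scores: dict[str, float]) -> float:
    """Each dimension is assumed normalized to 0-100, higher is better."""
    return sum(WEIGHTS[dim] * scores[dim] for dim in WEIGHTS)

def gate_passes(scores: dict[str, float], threshold: float = 70.0) -> bool:
    return composite_score(scores) >= threshold

scores = {"accuracy": 88, "bias": 92, "hallucination_rate": 75,
          "safety": 90, "compliance": 80, "latency": 95}
result = gate_passes(scores)  # composite is 85.55, above the 70 threshold
```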
Connect evals to CI/CD via MCP API
The EDD Gate stage can be wired into your CI/CD pipeline. A failed gate blocks deployment automatically. See the MCP API section for integration details.
Enforcement
Policy engine, violation management, and Guardian Agent controls. Define what your AI systems can and cannot do, then enforce it automatically.
Policy Types
KoraSafe supports seven enforcement policy types. Policies apply to individual systems or system groups:
| Policy type | What it does | Typical use |
|---|---|---|
| Input filter | Blocks or transforms input before it reaches the AI system | Strip PII from prompts, block prompt-injection patterns |
| Output filter | Blocks or transforms AI output before it reaches users or downstream systems | Redact SSNs, block hallucinated citations, remove toxic content |
| Approval workflow | Routes consequential decisions to a human approver before execution | High-stakes hiring decisions, loan denials, medical recommendations |
| Circuit breaker | Halts system operation when a threshold is breached. Acts as an emergency kill switch for individual agents or the entire fleet. | Budget cap reached, error rate spike, anomalous output volume |
| Pre-deployment gate | Blocks deployment until governance conditions are met | Require passing eval score, completed DPIA, or risk assessment before go-live |
| Rate limiting | Limits request volume or output frequency per time window | Prevent runaway agents, control API spend, enforce fair-use policies |
| Trust scoring | Assigns trust scores to agent interactions, restricts low-trust actions. Enables progressive autonomy through tier graduation as agents demonstrate compliant behavior. | Restrict autonomous agents from high-impact actions until trust is established |
Violation Management
When a Guardian Agent or policy rule detects a violation, it appears in the Violations tab with severity, timestamp, affected system, and the specific rule triggered. Filter by severity or status. Resolve violations through the admin workflow; each resolution is timestamped and logged to the immutable audit trail.
Critical
Immediate action required. Probable regulatory breach or prohibited practice detected. May trigger circuit breaker automatically.
High
Significant governance gap. Address within 24 to 48 hours to maintain compliance posture.
Medium
Governance gap that should be remediated in the current sprint or cycle.
Low
Minor deviation. Log for awareness and address in next review cycle.
Guardian Controls Panel
Navigate to Enforcement > Guardian Controls to activate, pause, or configure individual Guardian Agents. Each guardian shows its trigger count, last-active timestamp, and current autonomy level. You can adjust each guardian's autonomy level independently: a guardian set to Observe reports findings without taking action, while one set to Act can execute remediation automatically.
Guardian Agents
Six specialized AI agents that monitor your fleet around the clock. They detect violations and enforce policies in real time.
How Guardians Work
Each guardian operates as a specialist agent that evaluates inputs, outputs, or system behavior against its specialized detection rules. Results are returned as structured findings with severity, evidence, and recommended remediation. At autonomy level Observe, findings are logged only. At Propose, the guardian creates a pending resolution that a human approves. At Act, the guardian executes remediation directly and logs the action.
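The finding-plus-dispatch pattern described above can be sketched as a small data structure. The field and function names here are illustrative, not KoraSafe internals:

```python
# Sketch: a structured guardian finding and autonomy-level dispatch.
# Names are illustrative; KoraSafe's internal representation may differ.
from dataclasses import dataclass

@dataclass
class Finding:
    guardian: str
    severity: str       # Critical / High / Medium / Low
    evidence: str
    remediation: str

def handle(finding: Finding, autonomy: str) -> str:
    """Observe: log only. Propose: queue a pending resolution for human
    approval. Act: execute remediation directly and log the action."""
    if autonomy == "Observe":
        return f"logged: {finding.guardian} {finding.severity}"
    if autonomy == "Propose":
        return f"pending approval: {finding.remediation}"
    if autonomy == "Act":
        return f"executed: {finding.remediation}"
    raise ValueError(f"unknown autonomy level: {autonomy}")

f = Finding("PII Sentinel", "Critical", "SSN in output", "redact output")
```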
Start with PII Sentinel and Hallucination Detector
For most teams, the two highest-value guardians to activate first are PII Sentinel (catches data leakage immediately) and Hallucination Detector (prevents fabricated regulatory citations from reaching users). Set both to Observe initially to understand your baseline before enabling enforcement actions.
PII Types Detected
| PII type | Severity | Examples |
|---|---|---|
| Social Security Number | Critical | XXX-XX-XXXX patterns |
| Credit Card Number | Critical | 16-digit card numbers with Luhn validation |
| Passport Number | Critical | Country-specific passport formats |
| Email Address | High | user@domain.com patterns |
| Phone Number | High | US and international formats |
| Physical Address | High | Street address with city/state/ZIP |
| Medical Record Number | High | MRN patterns in healthcare context |
| Full Name | Medium | First + last name combinations |
| Date of Birth | Medium | Date strings in PII context |
| IP Address | Low | IPv4 and IPv6 patterns |
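To make the detection approach concrete, here is a deliberately simplified sketch of two detectors from the table: an SSN pattern match and a credit-card match with Luhn validation. Production detectors are context-aware and far more robust; these regexes are illustrative only.

```python
# Simplified sketch of two PII detectors: SSN pattern matching and
# credit-card detection with Luhn checksum validation.
import re

SSN_RE = re.compile(r"\b\d{3}-\d{2}-\d{4}\b")
CARD_RE = re.compile(r"\b(?:\d[ -]?){13,16}\b")

def luhn_valid(number: str) -> bool:
    """Luhn checksum: double every second digit from the right,
    subtract 9 from results over 9, and require sum % 10 == 0."""
    digits = [int(d) for d in re.sub(r"\D", "", number)]
    checksum = 0
    for i, d in enumerate(reversed(digits)):
        if i % 2 == 1:
            d *= 2
            if d > 9:
                d -= 9
        checksum += d
    return checksum % 10 == 0

def scan(text: str) -> list[tuple[str, str]]:
    findings = [("SSN", m.group()) for m in SSN_RE.finditer(text)]
    findings += [("Credit Card", m.group()) for m in CARD_RE.finditer(text)
                 if luhn_valid(m.group())]
    return findings

hits = scan("Card 4111 1111 1111 1111, SSN 123-45-6789")
```

The Luhn check is what separates a real card number from any random 16-digit string, which keeps the Critical-severity detector's false-positive rate down.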
KoraSafe AI Assistant
Your always-on governance teammate. Ask about regulatory requirements, policy gaps, or compliance status in plain language. Every answer is cited against the regulatory knowledge base.
What KoraSafe can answer
- "Does my healthcare scheduling agent fall under EU AI Act Annex III?"
- "What does Art. 22 GDPR require for automated decision-making?"
- "Which of my registered systems are missing a DPIA?"
- "What's the difference between Colorado SB 205 and NYC Local Law 144?"
- "How do I remediate the Human Oversight gap flagged in my last assessment?"
Knowledge Base
KoraSafe's answers are grounded in a continuously updated regulatory knowledge base covering:
EU AI Act
Full text with Articles, Recitals, and Annexes I through XIII. Updated to reflect official amendments and guidelines.
GDPR
All 99 articles, 173 recitals, and supervisory authority guidance documents.
US State Laws
Colorado SB 205, NYC Local Law 144, CCPA/CPRA, and additional US state AI legislation.
Enforcement Actions
Real DPA decisions, FTC actions, and court judgments. Grounds KoraSafe's risk analysis in actual enforcement patterns.
Citing Sources
Every KoraSafe response includes source citations: the specific document, article, or section the answer draws from. Citations appear as footnotes in the answer. Click a citation to view the full source text. If KoraSafe cannot find regulatory support for a claim, it says so rather than speculating.
Assessment Follow-Up Mode
After completing a risk assessment, the KoraSafe interface enters assessment context mode. Questions asked here are answered with awareness of your specific system's profile, risk tier, and governance posture. For example, asking "What do I need to fix first?" yields a prioritized, system-specific remediation list rather than generic guidance.
Data isolation
KoraSafe operates within your organization's data boundary. Conversations and context are not shared across organizations. KoraSafe cannot access other tenants' systems, assessments, or governance data.
Checklist and RACI
Track completion status across all governance pillars. Define accountability with an editable RACI matrix. Export everything as CSV or PDF for auditors.
Compliance Checklist
The checklist organizes 24 core governance controls across the Six Pillars. Each control has a status: Done, In Progress, or Not Started. Progress bars at the top of each pillar show overall completion.
Controls are mapped to regulatory requirements, so checking one off automatically updates the compliance posture displayed in the Governance heatmap and assessment results.
RACI Matrix
The RACI matrix defines accountability for every governance control. The four roles:
| Role | Meaning | Typical assignment |
|---|---|---|
| R (Responsible) | Does the work | AI Engineer, Data Scientist |
| A (Accountable) | Final decision-maker, signs off | Product Owner, Head of AI |
| C (Consulted) | Provides input and expertise | Legal, DPO, Security |
| I (Informed) | Kept up to date on progress | Board, Compliance Committee |
Edit any cell to assign a team member to a role. Changes are saved automatically and logged to the audit trail. Export the completed RACI as PDF for regulatory submissions or board reporting.
Exporting for Audits
Both the checklist and RACI matrix export to CSV and PDF. The PDF export includes:
- Organization name and export timestamp
- Overall completion percentage per pillar
- Control status with last-updated dates
- Assigned responsible and accountable owners
Build your audit package incrementally
Export the checklist and RACI alongside the Assessment Report and Technical Documentation. Regulators and auditors typically ask for these four artifacts as a baseline package for AI governance reviews.
Integrations
Connect KoraSafe to the tools your team already uses. Push governance alerts to Slack, create tickets in Jira, gate CI/CD pipelines, and ingest regulatory documents.
Connected Services
Slack
Guardian Agent alerts, violation notifications, and weekly governance summaries delivered to your chosen channels.
Jira / Linear
Violations and governance gaps automatically create tickets with priority, description, and regulatory context attached.
CI/CD Pipelines
Gate deployments on governance status via the MCP API. Block production pushes when EDD Gate fails or DPIA is missing.
Cloud Registries
Import AI system metadata from AWS SageMaker, Azure ML, GCP Vertex, and shared drives to auto-populate the AI Registry.
Regulatory Feeds
Automatic ingestion of new regulatory documents. KoraSafe's knowledge base stays current without manual uploads.
Webhooks
Fire custom webhooks on any platform event: violation detected, assessment completed, budget threshold crossed.
Document Ingestion
Import regulatory documents into the knowledge base from Integrations > Document Ingestion. Provide the document title, full text or URL, category (law, guidance, enforcement action, framework), and jurisdiction. KoraSafe indexes the document and makes it available for citations within a few minutes.
Setting Up Slack
Create a Slack Webhook
In your Slack workspace, go to Apps > Incoming Webhooks and create a new webhook URL for your #ai-governance channel.
Paste the Webhook URL Into KoraSafe
In Integrations > Connected Services, click Add Slack and paste the webhook URL.
Select Notification Types
Choose which events trigger Slack messages: Critical violations only, all violations, weekly summaries, or budget alerts.
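For reference, a Slack incoming webhook accepts a JSON body with a `text` field. The sketch below shows what a violation notification might look like on the wire; the message template is illustrative, not KoraSafe's exact format.

```python
# Sketch: posting a violation alert to a Slack incoming webhook.
# The message template is illustrative, not KoraSafe's actual format.
import json
import urllib.request

def violation_message(system: str, severity: str, rule: str) -> bytes:
    text = f":rotating_light: [{severity}] {system} violated rule '{rule}'"
    return json.dumps({"text": text}).encode("utf-8")

def post_to_slack(webhook_url: str, body: bytes) -> None:
    req = urllib.request.Request(
        webhook_url, data=body,
        headers={"Content-Type": "application/json"})
    urllib.request.urlopen(req)   # Slack replies "ok" on success

body = violation_message("Customer Churn Predictor v2", "Critical",
                         "PII in output")
# post_to_slack("https://hooks.slack.com/services/...", body)
```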
Admin and Settings
Organization management, user access, SSO, API keys, and security configuration.
Roles and Permissions
| Role | Capabilities |
|---|---|
| Owner | Full access including billing, org deletion, SSO configuration, and role assignment |
| Admin | Add/remove users, manage AI systems, configure policies, create API keys |
| Analyst | Run assessments, view all systems, export reports, resolve violations |
| Viewer | Read-only access to all governance data. Cannot modify systems or run assessments |
Inviting Users
Navigate to Admin > Users > Invite. Enter the email address and select a role. KoraSafe sends a token-based invitation link valid for 48 hours. The invitee clicks the link, completes account setup, and is added to your organization automatically.
SSO and MFA
KoraSafe supports SAML 2.0 and OIDC for Single Sign-On. Configure your identity provider (Okta, Azure AD, Google Workspace) in Admin > Security > SSO. Provide your IdP metadata URL or upload the XML metadata file. Once SSO is configured, you can enforce it organization-wide; new users must then authenticate through your IdP.
Multi-factor authentication can be enforced for all users or specific roles from Admin > Security > MFA. TOTP (authenticator app) and SMS are supported.
API Key Management
Create scoped API keys from Admin > API Keys. Each key can be scoped to specific endpoints (e.g., read-only access to assessments, or write access to the AI Registry). Rotate or revoke keys at any time. All key usage is logged with timestamps, IP addresses, and endpoint calls to the audit trail.
Feature Flags
Owners can enable or disable platform features per organization from Admin > Feature Flags. This allows gradual rollout of new modules to your team, or disabling features not yet ready for your governance workflow.
Danger zone
The Danger Zone section in Admin > Organization contains irreversible actions: exporting all data, transferring organization ownership, and deleting the organization. Deletion is permanent; export your audit data first.
MCP API
Model Context Protocol endpoint for agent-to-agent governance. Query the knowledge base, run assessments, and access compliance data programmatically.
What is MCP?
The Model Context Protocol (MCP) is an open standard for connecting AI agents to external data sources and tools. KoraSafe's MCP endpoint lets AI orchestrators query compliance data, run assessments, and enforce governance policies without human intervention, enabling fully automated governance pipelines.
Agent Card
KoraSafe publishes an A2A (Agent-to-Agent) agent card at /.well-known/agent.json. Other AI agents use this card to discover KoraSafe's capabilities and establish governed connections.
Available MCP Endpoints
| Endpoint | Method | Description |
|---|---|---|
| /api/assess | POST | Run a full multi-framework risk assessment programmatically |
| /api/chat | POST | Query KoraSafe with a natural language governance question |
| /api/query | POST | Semantic search over the regulatory knowledge base |
| /api/documents | GET/POST | List and ingest regulatory documents |
| /api/ingest | POST | Ingest a new regulatory document into the knowledge base |
| /api/health | GET | Platform health check for uptime monitoring |
| /api/guardian-scan | POST | Run a guardian agent scan against a text payload |
| /api/ai-systems | GET/POST | Read and write AI Registry entries |
Authentication
All MCP API calls require a bearer token. Create a scoped API key in Admin > API Keys and include it in the Authorization header:
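A minimal sketch of an authenticated request in Python. The `korasafe.ai` base URL and the key value are placeholders; only the `Authorization: Bearer` header shape matters.

```python
# Sketch: attaching a bearer token to an MCP API request.
# Base URL and key are placeholders, not confirmed values.
import json
import urllib.request

API_KEY = "ks_your_api_key"      # created under Admin > API Keys
BASE = "https://korasafe.ai"     # assumed base URL

def mcp_post(path: str, payload: dict) -> urllib.request.Request:
    return urllib.request.Request(
        BASE + path,
        data=json.dumps(payload).encode("utf-8"),
        headers={
            "Authorization": f"Bearer {API_KEY}",
            "Content-Type": "application/json",
        },
        method="POST",
    )

req = mcp_post("/api/chat", {"question": "Does Annex III apply to my system?"})
# urllib.request.urlopen(req) would send the request
```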
CI/CD Integration Example
To gate a deployment on governance status, add a step to your pipeline that calls /api/assess and fails the build if the compliance score is below your threshold:
POST /api/assess
{
"systemId": "sys_abc123",
"failThreshold": 70
}
# Returns 200 if score >= threshold, 422 if below
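A pipeline step implementing this gate could look like the sketch below: call `/api/assess` and exit non-zero on HTTP 422 so the build fails. Endpoint behavior follows the example above; error handling is deliberately minimal, and the base URL and key are placeholders.

```python
# Sketch of a CI gate step: POST /api/assess, fail the build on HTTP 422
# (compliance score below threshold). Minimal error handling by design.
import json
import urllib.error
import urllib.request

def gate(base_url: str, api_key: str, system_id: str, threshold: int) -> int:
    req = urllib.request.Request(
        f"{base_url}/api/assess",
        data=json.dumps({"systemId": system_id,
                         "failThreshold": threshold}).encode("utf-8"),
        headers={"Authorization": f"Bearer {api_key}",
                 "Content-Type": "application/json"},
        method="POST",
    )
    try:
        urllib.request.urlopen(req)          # 200: score >= threshold
        return 0
    except urllib.error.HTTPError as err:
        return 1 if err.code == 422 else 2   # 422: score below threshold

# In CI: sys.exit(gate("https://korasafe.ai", "ks_...", "sys_abc123", 70))
```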
MCP Dashboard
Monitor MCP server activity, request volume, error rates, and latency from the MCP Dashboard at /dashboard. Use this to verify integrations are working and debug failed requests.
Code audit
Scan agent source code for governance violations across every surface.
Running a code audit
There are four ways to run a code audit:
- Web platform — Navigate to Insights and Operations > Code Audit. Drag-drop a source file or paste a GitHub URL. The Code Auditor agent scans the code and maps findings to regulatory controls.
- KoraSafe agent bar — Type `/code-audit` followed by a description (e.g., `/code-audit the hiring agent`). Findings render as structured cards with Apply fix, Reject, and Escalate actions.
- GitHub Action — Add `korasafe/kora-action` to your workflow. The action runs on every pull request, posts findings as PR comments, and creates a Check Run that blocks merges when critical violations are found.
- VS Code extension — Diagnostics appear on file save. Open the KoraSafe sidebar for a compliance score ring and severity-grouped findings.
Audit findings dashboard
Navigate to Insights and Operations > Audit Findings to see all findings across surfaces in a unified view. Filter by severity (critical, high, medium, low), category (PII, HITL, secrets, data handling), status (open, acknowledged, resolved), and source (web, CI/CD, IDE, browser).
Select multiple findings and use bulk status update to acknowledge or resolve them in a single action.
Slash commands for audit
- `/code-audit` — Run the Code Auditor agent on an agent or code snippet
- `/deps` — Run the Dependency Auditor to scan packages for CVEs and license issues
- `/fix` — Generate remediation patches for a specific finding
Policy packs
Versioned governance bundles tied to regulations.
Browsing the catalog
Navigate to Insights and Operations > Policy Packs. The catalog shows available packs organized by regulation (EU AI Act, GDPR, HIPAA, US state laws). Each pack displays its current version, last updated date, and subscriber count.
Subscribing and pinning
Click Subscribe on a pack to begin enforcing its policies across your surfaces. By default, subscriptions auto-update when a new pack version is published. To lock to a specific version, toggle Pin version — this ensures your policies remain stable during critical periods.
Human review gate
When a regulation is amended and a pack version is bumped, the update is held for human review before enforcement begins. An admin must approve the new version in the review queue. This prevents untested regulatory changes from reaching production.
Surface distribution
Each policy can be toggled per surface: web platform, CI/CD, IDE, and browser. This lets you enforce a policy in CI/CD (blocking deployments) while leaving it in advisory mode in the IDE (showing diagnostics without blocking saves).
Extensions
KoraSafe governance runs inside your IDE, browser, and CI/CD pipeline.
VS Code extension
Install the .vsix package from Extensions > Install from VSIX in VS Code. Once installed:
- Diagnostics on save — Squiggly underlines appear on governance violations (PII, missing HITL gates, hardcoded secrets) every time you save a file.
- Sidebar — The KoraSafe sidebar (shield icon) shows a compliance score ring and findings grouped by severity.
- Quick fixes — Click the lightbulb on a diagnostic to apply a one-click fix.
- Hover tooltips — Hover over a finding to see which regulation and article it maps to.
- Commands — `KoraSafe: Scan File`, `KoraSafe: Scan Workspace`, and `KoraSafe: Set API Key` are available from the command palette.
Chrome browser extension
Load the extension unpacked from extensions/chrome/ in chrome://extensions. The extension monitors AI chat interfaces and LLM API calls:
- Network interception — Detects API calls to OpenAI, Anthropic, Azure OpenAI, Google AI, Cohere, and Bedrock endpoints.
- Shadow AI detection — Identifies when team members use ChatGPT, Gemini, Copilot, or other AI tools on unauthorized pages.
- PII scanning — Scans text entered into AI chat inputs for SSNs, email addresses, phone numbers, and other PII before it leaves the browser.
- Side panel — Click the extension icon to open the side panel with Summary, Findings, and Timeline tabs.
GitHub Action
Add the KoraSafe governance action to any repository:
on: [pull_request]
jobs:
audit:
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v4
- uses: korasafe/kora-action@main
with:
api-key: ${{ secrets.KORASAFE_API_KEY }}
fail-on: critical
The action collects changed files in the PR, submits them to the Code Auditor and Dependency Auditor agents, and posts a findings summary as a PR comment. Set fail-on to critical, high, or medium to control the merge-blocking threshold.
GitLab CI
Include the reusable template in your .gitlab-ci.yml:
include:
  - remote: 'https://korasafe.ai/ci/.korasafe-ci.yml'
JS and Python SDKs
For programmatic access, use the official SDKs:
import { KoraSafe } from '@korasafe/sdk';
const kora = new KoraSafe({ apiKey: 'ks_...' });
const findings = await kora.audit.findings({ severity: 'critical' });
# Python
from korasafe import KoraSafe
kora = KoraSafe(api_key="ks_...")
findings = kora.audit.findings(severity="critical")
FinOps
Monitor, allocate, and optimize LLM spend across your organization.
Cost center management
Navigate to Insights and Operations > FinOps. Create cost centers by team, project, or use case. Allocate monthly budgets and track actual spend against each center in real time.
Budget alerts
Set threshold-based alerts (e.g., "alert when 80% of budget consumed"). Alerts route through the severity-based notification system — critical budget breaches trigger Slack and email notifications.
Cost-per-action tracking
Break down LLM spend by action type: KoraSafe queries, risk assessments, guardian scans, code audits, and document ingestion. Identify which workflows consume the most tokens and optimize accordingly.
Reports
- Usage forecast — Project future spend based on current trends to plan capacity.
- Chargeback report — Allocate AI costs to business units for internal billing.
- Value report — Quantify governance ROI: compliance gaps closed, findings remediated, audit hours saved.
System health
Real-time platform monitoring and diagnostics.
Navigate to Insights and Operations > System Health. The dashboard shows:
- Service probes — Every 5 minutes, health probes check the database, auth provider, LLM endpoints, and integrations. Results display as green (operational), amber (degraded), or red (down).
- Error log — Searchable log with structured error codes, request IDs, and timestamps. Stack traces and internal paths are never exposed to API clients.
- Endpoint health — Per-endpoint latency and error rate metrics. Spot degraded routes before they impact users.
- Database health — Connection pool utilization, query performance, and table size monitoring.
- SLA report — Track availability, response times, and governance finding resolution windows against your SLA targets.
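The green/amber/red status a probe reports can be thought of as a mapping from latency and error rate to a traffic light. A sketch with illustrative thresholds (the cutoffs are assumptions, not the platform's actual values):

```python
def probe_status(latency_ms, error_rate: float) -> str:
    """Map probe measurements to a traffic-light status.

    Thresholds here are illustrative: a missing reading or >5% errors
    is red; >1% errors or >500 ms latency is amber; otherwise green.
    """
    if latency_ms is None or error_rate > 0.05:
        return "red"
    if error_rate > 0.01 or latency_ms > 500:
        return "amber"
    return "green"

status = probe_status(latency_ms=120, error_rate=0.0)  # "green"
```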
Alerts and notifications
Severity-based routing with SLA tracking across Slack, email, and in-app channels.
How alert routing works
When a governance finding is detected, KoraSafe routes the alert based on severity:
| Severity | Channels | Response SLA | Example |
|---|---|---|---|
| Critical | Slack DM + email + platform banner | 1 hour | PII in production response, CVE with active exploit |
| High | Slack channel + email digest | 24 hours | Missing HITL gate, high-risk EU AI Act classification |
| Medium | Platform notification + weekly digest | 1 week | Outdated dependency, missing error handling |
| Low | Platform notification only | Next sprint | Code style violation, missing model card field |
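The table above can be read as a lookup from severity to channels and SLA. A sketch of that routing (the channel identifiers and dictionary shape are assumptions for illustration):

```python
ROUTING = {
    "critical": {"channels": ["slack_dm", "email", "banner"], "sla_hours": 1},
    "high":     {"channels": ["slack_channel", "email_digest"], "sla_hours": 24},
    "medium":   {"channels": ["platform", "weekly_digest"], "sla_hours": 168},
    "low":      {"channels": ["platform"], "sla_hours": None},  # next sprint
}

def route(finding: dict):
    """Return the delivery channels and response SLA for a finding."""
    rule = ROUTING[finding["severity"]]
    return rule["channels"], rule["sla_hours"]

channels, sla = route({"severity": "high"})  # slack_channel + email_digest, 24h
```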
Critical alert banner
When critical or high-severity alerts are active, a persistent banner appears at the top of the platform. The banner polls every 60 seconds and links directly to the audit findings dashboard. Banners can be dismissed per session but reappear until the underlying alert is resolved.
Notification preferences
Navigate to your profile or Administration > Notification Preferences. Toggle delivery channels (in-app, email, Slack) independently for each category:
- Governance — Risk assessments, compliance gaps, policy violations, governance score changes
- Agents — Pending approvals, trust tier changes, circuit breakers, error spikes
- Security — Failed logins, new device access, role changes, API key expiry
- Usage — Approaching limits, budget thresholds, overages, plan renewals
- System — Service degradation, error rate spikes, SLA breaches
- Team — New members, invitations accepted, role changes
Preferences are saved per user and per organization.
Alert rules engine
Admins can define custom alert rules with a metric, operator, threshold, cooldown period, and channel routing. Rules are evaluated every 5 minutes by the cron engine. Stale alerts (open for more than 24 hours with no recurrence) are auto-resolved.
SLA compliance
Every alert tracks first_detected_at, acknowledged_at, and resolved_at. The SLA compliance endpoint (/api/alerts/sla) returns breach rates per severity tier, helping teams identify bottlenecks in their remediation workflows.
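A breach rate is simply the fraction of alerts resolved outside their SLA window. A sketch of the computation against the timestamps above (the function is illustrative, not the endpoint's implementation):

```python
from datetime import datetime, timedelta

def breach_rate(alerts: list, sla_hours: float) -> float:
    """Fraction of alerts resolved later than the SLA window allows."""
    if not alerts:
        return 0.0
    breaches = sum(
        1 for a in alerts
        if (a["resolved_at"] - a["first_detected_at"]).total_seconds() / 3600
        > sla_hours
    )
    return breaches / len(alerts)

t0 = datetime(2025, 1, 1, 9, 0)
criticals = [
    {"first_detected_at": t0, "resolved_at": t0 + timedelta(minutes=30)},
    {"first_detected_at": t0, "resolved_at": t0 + timedelta(hours=2)},
]
rate = breach_rate(criticals, sla_hours=1)  # one of two breached the 1h SLA
```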
Glossary
Key terms used throughout KoraSafe and in AI governance regulation.
| Term | Definition |
|---|---|
| A2A Protocol | Agent-to-Agent communication standard. Allows AI agents to discover and interact with KoraSafe's governance capabilities programmatically. |
| Annex III | The EU AI Act annex listing eight categories of high-risk AI systems, including employment tools, biometric systems, and critical infrastructure. |
| Autonomy level | A four-tier classification (Observe, Advise, Supervised Action, Full Autonomy) that defines an AI system's decision-making independence and the governance controls required. |
| Circuit breaker | An enforcement policy that halts system operation when a threshold is breached, for example, stopping an agent when its budget cap is reached. |
| Compliance score | A 0 to 100 readiness score derived from a system's governance posture, risk tier, and active controls. Used to track progress and gate deployments. |
| DPIA | Data Protection Impact Assessment. Required under GDPR Art. 35 for processing that is "likely to result in a high risk" to individuals, including most AI decision-making systems. |
| EDD pipeline | Eval-Driven Development. A four-stage quality gate (Define, Develop, Gate, Monitor) that applies governance criteria before and after AI deployment. |
| EU AI Act | Regulation (EU) 2024/1689 on artificial intelligence. The world's first comprehensive AI regulation, creating risk-based obligations for AI systems placed on the EU market. |
| Governance heatmap | A visual matrix showing every AI system's status across every governance dimension. Red = not implemented, yellow = in progress, green = done. |
| Guardian agent | A specialized AI agent that continuously monitors your fleet for specific violation types: PII, bias, hallucination, cost overruns, autonomy violations, or compliance drift. |
| HITL | Human-in-the-Loop. A governance pattern requiring human review and approval for AI actions above a defined consequence threshold. |
| MCP | Model Context Protocol. An open standard for connecting AI agents to external data and tools. KoraSafe's MCP endpoint enables programmatic governance. |
| Maturity radar | A spider-chart visualization scoring governance maturity across seven dimensions on a five-level scale from Initial to Optimized. |
| NIST AI RMF | National Institute of Standards and Technology AI Risk Management Framework. A voluntary US framework for managing AI risks across four functions: Govern, Map, Measure, Manage. |
| PII Sentinel | KoraSafe's guardian that detects personally identifiable information in AI inputs and outputs in real time. |
| RACI matrix | Responsibility Assignment Matrix. Defines who is Responsible, Accountable, Consulted, and Informed for each governance control. |
| RAG | Retrieval-Augmented Generation. The technique the KoraSafe assistant uses to ground its answers in the platform's regulatory knowledge base rather than relying on model training alone. |
| Risk tier | The EU AI Act classification assigned to an AI system: Prohibited, High Risk, Limited Risk, or Minimal Risk. |
| Six Pillars | KoraSafe's governance framework: Human Oversight, Logging and Audit, Bias Testing, Risk Management, Data Governance, and Transparency. |
| Trust score | A numeric metric assigned to agent interactions that reflects behavioral reliability. Used by the Autonomy Guard to restrict high-impact actions from low-trust agents. |
| Code audit | Automated static analysis of AI agent source code for governance violations. Findings map to regulatory controls across EU AI Act, GDPR, and HIPAA. |
| Policy pack | A versioned bundle of governance policies tied to a specific regulation. Packs use semver and support human review gates before enforcement. |
| Knowledge Graph | A structured map of regulations, articles, and controls that enables cross-regulation credit and unified compliance scoring. |
| Cross-regulation credit | When a single governance control satisfies overlapping requirements from multiple regulatory frameworks. |
| Shadow AI | Unauthorized or unregistered AI tool usage within an organization, detected by the browser extension. |
| SLA compliance | Tracking whether governance findings are acknowledged and resolved within the defined response windows per severity tier. |
| FinOps | The discipline of monitoring, allocating, and optimizing LLM API spend across an organization. |
| GRC connector | Integration module bridging KoraSafe with enterprise GRC systems like ServiceNow and OneTrust. |