KoraSafe / User Guide

AI Governance Intelligence
for the Agentic Era

This guide covers every module, workflow, and concept you need to govern your AI fleet with confidence.

EU AI Act
GDPR
US State Laws
ISO 42001
NIST AI RMF
SOC 2

KoraSafe is an AI governance intelligence platform that helps enterprise teams classify risk, register AI systems, enforce policies, and maintain continuous compliance across multi-framework regulatory environments. Six specialized Guardian Agents monitor your AI fleet around the clock. KoraSafe, the AI assistant, answers governance questions with cited regulatory sources.

Who This Guide Is For

Governance Leads

Set up your AI registry, run assessments, and generate audit-ready documentation for legal and regulators.

AI / ML Engineers

Register systems, configure enforcement policies, and connect KoraSafe to CI/CD pipelines via MCP API.

Risk and Compliance

Track maturity scoring, manage RACI matrices, export checklists, and monitor violation dashboards.

No prior compliance expertise required

KoraSafe's assessment engine and the KoraSafe assistant guide you through regulatory requirements step by step. You describe your AI system; the platform handles the classification and gap analysis.

Getting started

Quick Start

Get from zero to a full governance posture in four steps. Most teams complete the initial setup in under an hour.

Setup flow:
  1. Invite team: Roles are Owner, Admin, Analyst, Viewer
  2. Register AI: Add systems to the AI Registry
  3. Assess risk: Run a multi-framework assessment
  4. Activate Guardians: Enable real-time monitoring
Step 1: Create Your Organization and Invite Your Team

Navigate to Admin > Organization. Set your organization name, industry, and preferred jurisdictions. Then go to Admin > Users and send token-based invitations to colleagues. Assign roles based on their responsibilities: Owners control org settings, Admins manage systems, Analysts run assessments, and Viewers have read-only access.

Step 2: Register Your AI Systems

Go to AI Registry and click Add System. Fill in the name, system type, deployment domain, underlying model, and assigned owner. Set the autonomy level: Observe, Advise, Supervised Action, or Full Autonomy. Each registered system becomes a governed asset tracked across all modules.

Step 3: Run Your First Risk Assessment

Navigate to Risk Assessment and select a registered system. Work through the five-step guided workflow: agent type, industry, data categories, affected populations, and jurisdictions. The assessment engine classifies the system under EU AI Act, GDPR, and relevant US state laws within about a minute. Download the resulting Assessment Report, Technical Documentation, or Governance Roadmap.

Step 4: Activate Guardian Agents

Go to Enforcement > Guardian Controls. Enable the guardians relevant to your systems. PII Sentinel and Hallucination Detector are recommended for all deployments. Set each guardian's autonomy level: Observe (report only), Propose (request human approval), or Act (execute within boundaries). Guardian activity streams into the Enforcement violation log automatically.

No-code quick assessment available at korasafe.ai

The homepage offers an instant risk assessment without signing in. Describe your AI system, answer the guided questions, and get a risk classification with regulatory citations in under 60 seconds. Use this for a fast first look before setting up your full organization account.

Foundation

Core Concepts

Key ideas that underpin how KoraSafe works. Understanding these makes every module easier to use.

Risk Tiers

Every AI system in KoraSafe is classified into one of four risk tiers under the EU AI Act. The tier determines the controls, documentation, and monitoring required.

Tier 1: Prohibited
Banned under EU AI Act Art. 5. Immediate remediation required before any deployment.

Tier 2: High Risk
Annex III systems. Full governance framework, conformity assessment, and risk management system required.

Tier 3: Limited Risk
Transparency obligations apply. Document governance, implement disclosure measures, monitor continuously.

Tier 4: Minimal Risk
No mandatory EU AI Act requirements. Maintain governance best practices and monitoring.

Autonomy Levels

Every registered AI system is assigned an autonomy level that defines the governance controls it requires and how Guardian Agents enforce boundaries.

Observe
Level 0: Read, log, and report

The system monitors and produces output but takes no external actions. Recommended for analytics and reporting tools. Lowest enforcement overhead.

Advise
Level 1: Recommend and draft

The system generates recommendations or drafts content for human review. A human approves before anything is executed. Standard for decision-support tools.

Supervised
Level 2: Act within pre-approved boundaries

The system executes within a defined scope you control. All actions are logged and auditable. Override is available at any time. Appropriate for workflow automation.

Autonomous
Level 3: Full execution authority

The system acts independently within its domain. Requires the strictest governance controls: continuous guardian monitoring, complete audit trail, and Human-in-the-Loop approval gates for out-of-scope actions.

The Six Pillars of AI Governance

KoraSafe measures governance maturity across six pillars. Each pillar has four to five controls that map to regulatory requirements and industry standards.

  • Human oversight: HITL mechanisms, override controls, escalation workflows
  • Logging and audit: Decision trails, timestamps, immutable audit logs
  • Bias testing: Demographic parity, 4/5 rule, disparate impact checks
  • Risk management: Risk registry, impact assessments, residual risk tracking
  • Data governance: Data lineage, PII handling, consent management, DPIAs
  • Transparency: Disclosure obligations, explainability, user notifications
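The 4/5 rule referenced under Bias testing compares each group's selection rate against the most-favored group. A minimal sketch of the check (an illustration only, not KoraSafe's implementation):

```python
def four_fifths_check(selection_rates: dict[str, float]) -> tuple[float, bool]:
    """Compare each group's selection rate to the highest-rate group.

    Returns the worst-case impact ratio and whether it passes the 4/5 rule.
    """
    highest = max(selection_rates.values())
    worst_ratio = min(rate / highest for rate in selection_rates.values())
    return worst_ratio, worst_ratio >= 0.8

# Hypothetical hiring selection rates per demographic group:
ratio, passes = four_fifths_check({"group_a": 0.50, "group_b": 0.30})
# 0.30 / 0.50 = 0.6, below the 0.8 threshold, so disparate impact is flagged
```

An impact ratio below 0.8 is the classic signal for disparate impact review; Bias Watchdog applies this style of analysis across protected categories.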

Regulatory Frameworks

Framework | Jurisdiction | Scope | Coverage in KoraSafe
EU AI Act | European Union | AI systems placed on EU market | Full: risk tiers, Annex III, prohibited practices
GDPR | EU / EEA | Processing of personal data | Full: Art. 22, Art. 35 DPIA, lawful basis, DPAs
Colorado SB 205 | Colorado, US | High-risk algorithmic systems | Impact assessment, bias auditing requirements
NYC Local Law 144 | New York City | Automated employment decisions | Bias audit requirements for hiring tools
CCPA / CPRA | California, US | Consumer data rights | Data rights workflows, opt-out mechanisms
NIST AI RMF | US (voluntary) | AI risk management | Maturity framework alignment
ISO 42001 | International | AI management systems | Governance maturity mapping

Module

Risk Assessment

Multi-framework risk classification. Evaluate AI systems across EU AI Act, GDPR, and US state laws with cited regulatory sources. Results in under 60 seconds.

Assessment workflow, five steps:
  1. Agent type: Select all applicable functions
  2. Industry: Deployment domain
  3. Data: All data types accessed
  4. Populations: Affected and vulnerable groups
  5. Jurisdictions: Where your system operates

Running an Assessment

Navigate to Risk Assessment and select a registered AI system (or start a new one from the landing page). The assessment works through five sections:

Step 1: Agent type

Select all functions that apply to your AI system. Most agents serve multiple functions, and each one can trigger different regulatory requirements. For example, a hiring-screening agent that also uses biometric recognition triggers both Employment (Annex III, 4a) and Biometric Categorization (Annex III, 1a) high-risk categories under the EU AI Act simultaneously.

Step 2: Industry

Select the industry your agent serves, not the industry your company operates in. A fraud-detection agent for a bank operates in Financial Services even if the vendor builds fintech software.

Step 3: Data categories

Select every data type the system accesses, processes, or generates. Health data triggers GDPR special categories (Art. 9). Biometric data can trigger EU AI Act prohibitions. Select all that apply; the engine resolves overlapping requirements automatically.

Be comprehensive on data categories

Underreporting data types is the most common cause of under-classification. If your system even incidentally processes a data type, select it. The engine will note it in context rather than misclassifying the system as lower risk.

Step 4: Affected Populations

Include everyone your system makes decisions about, interacts with, or communicates to. Vulnerable groups (minors, patients, people with disabilities) trigger stricter requirements under both EU AI Act and GDPR. Select all groups that may be reached, not just the primary intended users.

Step 5: Jurisdictions and Existing Controls

Select where your system operates or serves users. Then work through the governance posture section: for each practice (e.g., "Human review before consequential decisions," "Bias testing in production"), select your current status. Honest answers produce better recommendations.

Assessment Output

Follow-Up Q&A

After the assessment completes, a chat interface appears at the bottom of the results page. Ask KoraSafe follow-up questions grounded in your specific assessment context, for example, "What does Art. 9(2)(g) mean for my health-data system?" or "Which controls address the DPIA requirement?" Answers are cited against the regulatory knowledge base.

Report type | Contents | Typical use
Assessment Report | Risk classification, findings, regulatory citations, gap summary | Legal team, DPO, board
Technical Documentation | System description, data flows, control inventory, risk register | Conformity assessment, regulatory filing
Governance Roadmap | Prioritized remediation steps, timelines, responsible owners | Engineering and compliance sprint planning

Module

AI Registry

The single source of truth for every AI system in your organization. Catalog, classify, and track lifecycle status across your entire AI fleet.

Adding a System

Click Add System in the AI Registry. The minimum required fields are:

  • Name: a clear internal identifier (e.g., "Customer Churn Predictor v2")
  • System type: classification, prediction, generative, agentic, or hybrid
  • Domain: the business area or industry the system serves
  • Underlying model: the model family or vendor (e.g., GPT-4o, Llama 3, internal model)
  • Owner: the team member accountable for this system's governance
  • Autonomy level: Observe, Advise, Supervised, or Autonomous
  • Lifecycle status: Development, Staging, Production, or Deprecated
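Registry entries can also be created programmatically through the /api/ai-systems MCP endpoint (see the MCP API module). A standard-library sketch follows; the JSON field names are assumptions, so confirm them against the API reference:

```python
import json
import urllib.request

API_URL = "https://korasafe.ai/api/ai-systems"  # adjust to your deployment

def build_registration(api_key: str, system: dict) -> urllib.request.Request:
    """Build a POST request that registers one AI system."""
    return urllib.request.Request(
        API_URL,
        data=json.dumps(system).encode(),
        headers={
            "Authorization": f"Bearer {api_key}",
            "Content-Type": "application/json",
        },
        method="POST",
    )

# Field names below are illustrative, not a documented schema.
new_system = {
    "name": "Customer Churn Predictor v2",
    "systemType": "prediction",
    "domain": "Financial Services",
    "model": "GPT-4o",
    "owner": "jane@example.com",
    "autonomyLevel": "Advise",
    "lifecycleStatus": "Development",
}

req = build_registration("ks_live_example", new_system)
# urllib.request.urlopen(req)  # uncomment to send; requires a valid API key
```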

Fleet View

The fleet view shows all registered systems in a sortable, filterable table. Filter by risk classification, autonomy level, lifecycle status, or owner. Click any system to open its detail view.

System Detail Tabs

Tab | Contents
Overview | Risk score, autonomy classification, compliance readiness percentage, last assessment date
Governance | Six-pillar status heatmap, maturity scores, dimension-by-dimension breakdown
Enforcement | Active guardrails, recent violations, Guardian Agent trigger counts
History | Timestamped changelog: assessments run, policy changes, ownership transfers, status changes

Deprecate, don't delete

When an AI system is retired, set its lifecycle status to Deprecated rather than deleting it. This preserves the audit trail and governance history for regulatory purposes. Regulators may request historical records for systems no longer in use.

Module

Governance

Track governance maturity across Six Pillars, score your program against industry benchmarks, and manage the Eval-Driven Development pipeline.

Governance Heatmap

The heatmap displays a matrix of all registered AI systems (rows) against the governance dimensions (columns). Color coding shows status at a glance:

Not Implemented

No control exists for this dimension. Indicates a gap that needs remediation.

In Progress

Control is partially implemented. Review what remains to reach compliance.

Implemented

Control is in place and documented. Eligible to be cited in audit evidence.

Maturity Radar

The maturity radar scores your organization across seven governance dimensions against a five-level scale:

Level | Name | Description
1 | Initial | Ad-hoc processes, no formal governance documentation
2 | Developing | Some documented processes, inconsistent application
3 | Defined | Standardized processes documented and followed consistently
4 | Managed | Processes measured, monitored, and continuously improved
5 | Optimized | Proactive governance, benchmarked against industry standards

Agent Evals (EDD Pipeline)

The Eval-Driven Development (EDD) pipeline applies a quality gate to AI systems before they reach production. Four stages:

  1. Define: Set eval criteria and benchmarks
  2. Develop: Build and iterate on the system
  3. Gate: Pass/fail against six dimensions
  4. Monitor: Continuous scoring in production

Evals are scored across six weighted dimensions: Accuracy, Bias, Hallucination Rate, Safety, Compliance, and Latency. A composite score must exceed the configured threshold before the Gate stage passes.
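The composite score is essentially a weighted average across the six dimensions. A sketch of the gate logic, where the weights and threshold are hypothetical (KoraSafe's are configured per organization) and each score is on a 0-100 scale with higher meaning better:

```python
# Hypothetical weights; configure your own in the EDD pipeline settings.
WEIGHTS = {
    "accuracy": 0.25, "bias": 0.20, "hallucination_rate": 0.20,
    "safety": 0.15, "compliance": 0.15, "latency": 0.05,
}

def composite_score(scores: dict[str, float]) -> float:
    """Weighted average across the six eval dimensions (weights sum to 1)."""
    return sum(WEIGHTS[dim] * scores[dim] for dim in WEIGHTS)

def gate_passes(scores: dict[str, float], threshold: float = 80.0) -> bool:
    """Gate stage passes only when the composite clears the threshold."""
    return composite_score(scores) >= threshold
```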

Connect evals to CI/CD via MCP API

The EDD Gate stage can be wired into your CI/CD pipeline. A failed gate blocks deployment automatically. See the MCP API section for integration details.

Module

Enforcement

Policy engine, violation management, and Guardian Agent controls. Define what your AI systems can and cannot do, then enforce it automatically.

Policy Types

KoraSafe supports seven enforcement policy types. Policies apply to individual systems or system groups:

Policy type | What it does | Typical use
Input filter | Blocks or transforms input before it reaches the AI system | Strip PII from prompts, block prompt-injection patterns
Output filter | Blocks or transforms AI output before it reaches users or downstream systems | Redact SSNs, block hallucinated citations, remove toxic content
Approval workflow | Routes consequential decisions to a human approver before execution | High-stakes hiring decisions, loan denials, medical recommendations
Circuit breaker | Halts system operation when a threshold is breached; acts as an emergency kill switch for individual agents or the entire fleet | Budget cap reached, error rate spike, anomalous output volume
Pre-deployment gate | Blocks deployment until governance conditions are met | Require passing eval score, completed DPIA, or risk assessment before go-live
Rate limiting | Limits request volume or output frequency per time window | Prevent runaway agents, control API spend, enforce fair-use policies
Trust scoring | Assigns trust scores to agent interactions and restricts low-trust actions; enables progressive autonomy through tier graduation as agents demonstrate compliant behavior | Restrict autonomous agents from high-impact actions until trust is established
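To make the Rate limiting policy type concrete, a common implementation is a token bucket: allow a burst up to the bucket's capacity, then sustain a fixed refill rate. A minimal sketch (illustrative, not KoraSafe's internals):

```python
import time

class TokenBucket:
    """Allow up to `capacity` requests in a burst, refilled at `rate`
    tokens per second thereafter."""

    def __init__(self, capacity: int, rate: float):
        self.capacity = capacity
        self.rate = rate
        self.tokens = float(capacity)
        self.last = time.monotonic()

    def allow(self) -> bool:
        # Refill proportionally to elapsed time, capped at capacity.
        now = time.monotonic()
        self.tokens = min(self.capacity, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= 1.0:
            self.tokens -= 1.0
            return True
        return False

bucket = TokenBucket(capacity=5, rate=1.0)  # 5-request burst, 1 req/sec sustained
```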

Violation Management

When a guardian agent or policy rule detects a violation, it appears in the Violations tab with severity, timestamp, affected system, and the specific rule triggered. Filter by severity or status. Resolve violations through the admin workflow; each resolution is timestamped and logged to the immutable audit trail.

Critical

Immediate action required. Probable regulatory breach or prohibited practice detected. May trigger circuit breaker automatically.

High

Significant governance gap. Address within 24 to 48 hours to maintain compliance posture.

Medium

Governance gap that should be remediated in the current sprint or cycle.

Low

Minor deviation. Log for awareness and address in next review cycle.

Guardian Controls Panel

Navigate to Enforcement > Guardian Controls to activate, pause, or configure individual Guardian Agents. Each guardian shows its trigger count, last-active timestamp, and current autonomy level. You can adjust each guardian's autonomy level independently: a guardian set to Observe reports findings without taking action, while one set to Act can execute remediation automatically.

Module

Guardian Agents

Six specialized AI agents that monitor your fleet around the clock. They detect violations and enforce policies in real time.

PII Sentinel
Real-time PII detection and redaction across all LLM input and output streams. Catches names, SSNs, credit cards, health IDs, email addresses, and phone numbers before data leaves the system boundary.
Bias Watchdog
Discrimination and fairness monitoring for screening, ranking, and decision-making agents. Applies the 4/5 rule and demographic parity analysis to flag disparate impact across protected categories: race, gender, age, disability, religion.
Hallucination Detector
Source grounding validation for conversational and advisory agents. Catches fabricated citations, ungrounded claims, and outputs with no knowledge-base support using LLM-as-Judge verification against the regulatory document corpus.
Cost Controller
Budget thresholds and automatic circuit breakers across all API-consuming agents. Detects spend anomalies, enforces rate limits, and triggers automated throttling at 70%, 85%, and 95% of budget cap.
Autonomy Guard
Autonomy level enforcement for supervised and fully autonomous agents. Blocks out-of-scope actions, detects unauthorized escalation attempts, and forces approval workflows when agents try to exceed their configured boundaries.
Compliance Auditor
Multi-framework compliance verification with RAG-powered regulatory context. Monitors all registered agents for compliance drift, policy adherence gaps, and documentation completeness against EU AI Act, GDPR, and US state laws.

How Guardians Work

Each guardian operates as a specialist agent that evaluates inputs, outputs, or system behavior against its specialized detection rules. Results are returned as structured findings with severity, evidence, and recommended remediation. At autonomy level Observe, findings are logged only. At Propose, the guardian creates a pending resolution that a human approves. At Act, the guardian executes remediation directly and logs the action.

Start with PII Sentinel and Hallucination Detector

For most teams, the two highest-value guardians to activate first are PII Sentinel (catches data leakage immediately) and Hallucination Detector (prevents fabricated regulatory citations from reaching users). Set both to Observe initially to understand your baseline before enabling enforcement actions.
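Guardian scans can also be triggered programmatically through the /api/guardian-scan MCP endpoint (listed in the MCP API module). A standard-library sketch follows; the request payload shape is an assumption, so confirm it against the API reference:

```python
import json
import urllib.request

API_BASE = "https://korasafe.ai"  # adjust to your deployment

def build_scan_request(api_key: str, text: str) -> urllib.request.Request:
    """Build a POST to /api/guardian-scan; the `text` field is illustrative."""
    return urllib.request.Request(
        f"{API_BASE}/api/guardian-scan",
        data=json.dumps({"text": text}).encode(),
        headers={
            "Authorization": f"Bearer {api_key}",
            "Content-Type": "application/json",
        },
        method="POST",
    )

req = build_scan_request("ks_live_example", "Contact me at jane@example.com")
# with urllib.request.urlopen(req) as resp:   # performs the network call
#     findings = json.load(resp)              # structured findings with severity
```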

PII Types Detected

PII type | Severity | Examples
Social Security Number | Critical | XXX-XX-XXXX patterns
Credit Card Number | Critical | 16-digit card numbers with Luhn validation
Passport Number | Critical | Country-specific passport formats
Email Address | High | user@domain.com patterns
Phone Number | High | US and international formats
Physical Address | High | Street address with city/state/ZIP
Medical Record Number | High | MRN patterns in healthcare context
Full Name | Medium | First + last name combinations
Date of Birth | Medium | Date strings in PII context
IP Address | Low | IPv4 and IPv6 patterns

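The Luhn validation mentioned for credit card numbers is a simple checksum that distinguishes plausible card numbers from arbitrary 16-digit strings. A minimal sketch of the standard algorithm:

```python
def luhn_valid(number: str) -> bool:
    """Return True if the digit string passes the Luhn checksum."""
    digits = [int(d) for d in number if d.isdigit()]
    total = 0
    # Double every second digit from the right; subtract 9 when it exceeds 9.
    for i, d in enumerate(reversed(digits)):
        if i % 2 == 1:
            d *= 2
            if d > 9:
                d -= 9
        total += d
    return total % 10 == 0
```

Pattern matches that fail the checksum are almost certainly not real card numbers, which is why PII Sentinel pairs the 16-digit pattern with this check.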
Module

KoraSafe AI Assistant

Your always-on governance teammate. Ask about regulatory requirements, policy gaps, or compliance status in plain language. Every answer is cited against the regulatory knowledge base.

What KoraSafe can answer

  • "Does my healthcare scheduling agent fall under EU AI Act Annex III?"
  • "What does Art. 22 GDPR require for automated decision-making?"
  • "Which of my registered systems are missing a DPIA?"
  • "What's the difference between Colorado SB 205 and NYC Local Law 144?"
  • "How do I remediate the Human Oversight gap flagged in my last assessment?"

Knowledge Base

KoraSafe's answers are grounded in a continuously updated regulatory knowledge base covering laws, official guidance, enforcement actions, and industry frameworks across the supported jurisdictions.

Citing Sources

Every KoraSafe response includes source citations: the specific document, article, or section the answer draws from. Citations appear as footnotes in the answer. Click a citation to view the full source text. If KoraSafe cannot find regulatory support for a claim, it says so rather than speculating.

Assessment Follow-Up Mode

After completing a risk assessment, the KoraSafe interface enters assessment context mode. Questions asked here are answered with awareness of your specific system's profile, risk tier, and governance posture. For example, asking "What do I need to fix first?" yields a prioritized, system-specific remediation list rather than generic guidance.

Data isolation

KoraSafe operates within your organization's data boundary. Conversations and context are not shared across organizations. KoraSafe cannot access other tenants' systems, assessments, or governance data.

Module

Checklist and RACI

Track completion status across all governance pillars. Define accountability with an editable RACI matrix. Export everything as CSV or PDF for auditors.

Compliance Checklist

The checklist organizes 24 core governance controls across the Six Pillars. Each control has a status: Done, In Progress, or Not Started. Progress bars at the top of each pillar show overall completion.

Controls are mapped to regulatory requirements, so checking one off automatically updates the compliance posture displayed in the Governance heatmap and assessment results.

RACI Matrix

The RACI matrix defines accountability for every governance control. The four roles:

Role | Meaning | Typical assignment
R (Responsible) | Does the work | AI Engineer, Data Scientist
A (Accountable) | Final decision-maker, signs off | Product Owner, Head of AI
C (Consulted) | Provides input and expertise | Legal, DPO, Security
I (Informed) | Kept up to date on progress | Board, Compliance Committee

Edit any cell to assign a team member to a role. Changes are saved automatically and logged to the audit trail. Export the completed RACI as PDF for regulatory submissions or board reporting.

Exporting for Audits

Both the checklist and RACI matrix export to CSV and PDF. The PDF export includes:

  • Organization name and export timestamp
  • Overall completion percentage per pillar
  • Control status with last-updated dates
  • Assigned responsible and accountable owners

Build your audit package incrementally

Export the checklist and RACI alongside the Assessment Report and Technical Documentation. Regulators and auditors typically ask for these four artifacts as a baseline package for AI governance reviews.

Module

Integrations

Connect KoraSafe to the tools your team already uses. Push governance alerts to Slack, create tickets in Jira, gate CI/CD pipelines, and ingest regulatory documents.

Connected Services

Document Ingestion

Import regulatory documents into the knowledge base from Integrations > Document Ingestion. Provide the document title, full text or URL, category (law, guidance, enforcement action, framework), and jurisdiction. KoraSafe indexes the document and makes it available to the assistant for citations within a few minutes.

Setting Up Slack

Step 1: Create a Slack Webhook

In your Slack workspace, go to Apps > Incoming Webhooks and create a new webhook URL for your #ai-governance channel.

Step 2: Paste the Webhook URL Into KoraSafe

In Integrations > Connected Services, click Add Slack and paste the webhook URL.

Step 3: Select Notification Types

Choose which events trigger Slack messages: Critical violations only, all violations, weekly summaries, or budget alerts.
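Before pasting the webhook URL into KoraSafe, you can verify it works with a short script. Slack incoming webhooks accept a JSON body with a `text` field; the URL below is a placeholder:

```python
import json
import urllib.request

WEBHOOK_URL = "https://hooks.slack.com/services/XXX/YYY/ZZZ"  # your webhook URL

def build_slack_message(text: str) -> bytes:
    """Encode the minimal incoming-webhook payload."""
    return json.dumps({"text": text}).encode()

def send(text: str) -> None:
    """POST a message to the webhook; raises on a non-2xx response."""
    req = urllib.request.Request(
        WEBHOOK_URL,
        data=build_slack_message(text),
        headers={"Content-Type": "application/json"},
        method="POST",
    )
    urllib.request.urlopen(req)

# send("KoraSafe webhook test: hello from #ai-governance")  # uncomment to test
```

If the test message appears in #ai-governance, the same URL will work in KoraSafe.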

Module

Admin and Settings

Organization management, user access, SSO, API keys, and security configuration.

Roles and Permissions

Role | Capabilities
Owner | Full access including billing, org deletion, SSO configuration, and role assignment
Admin | Add/remove users, manage AI systems, configure policies, create API keys
Analyst | Run assessments, view all systems, export reports, resolve violations
Viewer | Read-only access to all governance data; cannot modify systems or run assessments

Inviting Users

Navigate to Admin > Users > Invite. Enter the email address and select a role. KoraSafe sends a token-based invitation link valid for 48 hours. The invitee clicks the link, completes account setup, and is added to your organization automatically.

SSO and MFA

KoraSafe supports SAML 2.0 and OIDC for Single Sign-On. Configure your identity provider (Okta, Azure AD, Google Workspace) in Admin > Security > SSO. Provide your IdP metadata URL or upload the XML metadata file. Once SSO is configured, you can enforce it organization-wide; new users must then authenticate through your IdP.

Multi-factor authentication can be enforced for all users or specific roles from Admin > Security > MFA. TOTP (authenticator app) and SMS are supported.

API Key Management

Create scoped API keys from Admin > API Keys. Each key can be scoped to specific endpoints (e.g., read-only access to assessments, or write access to the AI Registry). Rotate or revoke keys at any time. All key usage is logged with timestamps, IP addresses, and endpoint calls to the audit trail.

Feature Flags

Owners can enable or disable platform features per organization from Admin > Feature Flags. This allows gradual rollout of new modules to your team, or disabling features not yet ready for your governance workflow.

Danger zone

The Danger Zone section in Admin > Organization contains irreversible actions: exporting all data, transferring organization ownership, and deleting the organization. Deletion is permanent; export your audit data first.

Module

MCP API

Model Context Protocol endpoint for agent-to-agent governance. Query the knowledge base, run assessments, and access compliance data programmatically.

What is MCP?

The Model Context Protocol (MCP) is an open standard for connecting AI agents to external data sources and tools. KoraSafe's MCP endpoint lets AI orchestrators query compliance data, run assessments, and enforce governance policies without human intervention, enabling fully automated governance pipelines.

Agent Card

KoraSafe publishes an A2A (Agent-to-Agent) agent card at /.well-known/agent.json. Other AI agents use this card to discover KoraSafe's capabilities and establish governed connections.

Available MCP Endpoints

Endpoint | Method | Description
/api/assess | POST | Run a full multi-framework risk assessment programmatically
/api/chat | POST | Query KoraSafe with a natural language governance question
/api/query | POST | Semantic search over the regulatory knowledge base
/api/documents | GET/POST | List and ingest regulatory documents
/api/ingest | POST | Ingest a new regulatory document into the knowledge base
/api/health | GET | Platform health check for uptime monitoring
/api/guardian-scan | POST | Run a guardian agent scan against a text payload
/api/ai-systems | GET/POST | Read and write AI Registry entries

Authentication

All MCP API calls require a bearer token. Create a scoped API key in Admin > API Keys and include it in the Authorization header:

Authorization: Bearer ks_live_xxxxxxxxxxxxxxxxxxxx

CI/CD Integration Example

To gate a deployment on governance status, add a step to your pipeline that calls /api/assess and fails the build if the compliance score is below your threshold:

# In your CI pipeline (GitHub Actions / GitLab CI)
POST /api/assess
{
  "systemId": "sys_abc123",
  "failThreshold": 70
}

# Returns 200 if score >= threshold, 422 if below
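A runnable version of that gate using only Python's standard library. The payload fields and status-code contract follow the pseudocode above; adjust the base URL and secret handling for your deployment:

```python
import json
import urllib.error
import urllib.request

API_URL = "https://korasafe.ai/api/assess"  # adjust to your deployment

def gate_passes(status_code: int) -> bool:
    """Per the contract above: 200 means score >= threshold, 422 means below."""
    return status_code == 200

def run_gate(api_key: str, system_id: str, fail_threshold: int) -> bool:
    """POST the assessment request and translate the response into pass/fail."""
    body = json.dumps({"systemId": system_id, "failThreshold": fail_threshold}).encode()
    req = urllib.request.Request(
        API_URL,
        data=body,
        headers={
            "Authorization": f"Bearer {api_key}",
            "Content-Type": "application/json",
        },
        method="POST",
    )
    try:
        with urllib.request.urlopen(req) as resp:
            return gate_passes(resp.status)
    except urllib.error.HTTPError as err:
        return gate_passes(err.code)  # 422 lands here and fails the gate

# In CI, exit nonzero to fail the build, e.g.:
#   sys.exit(0 if run_gate(os.environ["KORASAFE_API_KEY"], "sys_abc123", 70) else 1)
```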

MCP Dashboard

Monitor MCP server activity, request volume, error rates, and latency from the MCP Dashboard at /dashboard. Use this to verify integrations are working and debug failed requests.

Module

Code audit

Scan agent source code for governance violations across every surface.

Running a code audit

There are four ways to run a code audit:

  • Web platform — Navigate to Insights and Operations > Code Audit. Drag-drop a source file or paste a GitHub URL. The Code Auditor agent scans the code and maps findings to regulatory controls.
  • KoraSafe agent bar — Type /code-audit followed by a description (e.g., /code-audit the hiring agent). Findings render as structured cards with Apply fix, Reject, and Escalate actions.
  • GitHub Action — Add korasafe/kora-action to your workflow. The action runs on every pull request, posts findings as PR comments, and creates a Check Run that blocks merges when critical violations are found.
  • VS Code extension — Diagnostics appear on file save. Open the KoraSafe sidebar for a compliance score ring and severity-grouped findings.

Audit findings dashboard

Navigate to Insights and Operations > Audit Findings to see all findings across surfaces in a unified view. Filter by severity (critical, high, medium, low), category (PII, HITL, secrets, data handling), status (open, acknowledged, resolved), and source (web, CI/CD, IDE, browser).

Select multiple findings and use bulk status update to acknowledge or resolve them in a single action.

Slash commands for audit

  • /code-audit — Run the Code Auditor agent on an agent or code snippet
  • /deps — Run the Dependency Auditor to scan packages for CVEs and license issues
  • /fix — Generate remediation patches for a specific finding

Module

Policy packs

Versioned governance bundles tied to regulations.

Browsing the catalog

Navigate to Insights and Operations > Policy Packs. The catalog shows available packs organized by regulation (EU AI Act, GDPR, HIPAA, US state laws). Each pack displays its current version, last updated date, and subscriber count.

Subscribing and pinning

Click Subscribe on a pack to begin enforcing its policies across your surfaces. By default, subscriptions auto-update when a new pack version is published. To lock to a specific version, toggle Pin version — this ensures your policies remain stable during critical periods.

Human review gate

When a regulation is amended and a pack version is bumped, the update is held for human review before enforcement begins. An admin must approve the new version in the review queue. This prevents untested regulatory changes from reaching production.

Surface distribution

Each policy can be toggled per surface: web platform, CI/CD, IDE, and browser. This lets you enforce a policy in CI/CD (blocking deployments) while leaving it in advisory mode in the IDE (showing diagnostics without blocking saves).

Module

Extensions

KoraSafe governance runs inside your IDE, browser, and CI/CD pipeline.

VS Code extension

Install the .vsix package from Extensions > Install from VSIX in VS Code. Once installed:

  • Diagnostics on save — Squiggly underlines appear on governance violations (PII, missing HITL gates, hardcoded secrets) every time you save a file.
  • Sidebar — The KoraSafe sidebar (shield icon) shows a compliance score ring and findings grouped by severity.
  • Quick fixes — Click the lightbulb on a diagnostic to apply a one-click fix.
  • Hover tooltips — Hover over a finding to see which regulation and article it maps to.
  • Commands — KoraSafe: Scan File, KoraSafe: Scan Workspace, KoraSafe: Set API Key from the command palette.

Chrome browser extension

Load the extension unpacked from extensions/chrome/ in chrome://extensions. The extension monitors AI chat interfaces and LLM API calls:

  • Network interception — Detects API calls to OpenAI, Anthropic, Azure OpenAI, Google AI, Cohere, and Bedrock endpoints.
  • Shadow AI detection — Identifies when team members use ChatGPT, Gemini, Copilot, or other AI tools on unauthorized pages.
  • PII scanning — Scans text entered into AI chat inputs for SSNs, email addresses, phone numbers, and other PII before it leaves the browser.
  • Side panel — Click the extension icon to open the side panel with Summary, Findings, and Timeline tabs.

GitHub Action

Add the KoraSafe governance action to any repository:

name: KoraSafe Audit
on: [pull_request]
jobs:
  audit:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: korasafe/kora-action@main
        with:
          api-key: ${{ secrets.KORASAFE_API_KEY }}
          fail-on: critical

The action collects changed files in the PR, submits them to the Code Auditor and Dependency Auditor agents, and posts a findings summary as a PR comment. Set fail-on to critical, high, or medium to control the merge-blocking threshold.

GitLab CI

Include the reusable template in your .gitlab-ci.yml:

include:
  - remote: 'https://korasafe.ai/ci/.korasafe-ci.yml'

JS and Python SDKs

For programmatic access, use the official SDKs:

// JavaScript
import { KoraSafe } from '@korasafe/sdk';
const kora = new KoraSafe({ apiKey: 'ks_...' });
const findings = await kora.audit.findings({ severity: 'critical' });

# Python
from korasafe import KoraSafe
kora = KoraSafe(api_key="ks_...")
findings = kora.audit.findings(severity="critical")

FinOps

Monitor, allocate, and optimize LLM spend across your organization.

Cost center management

Navigate to Insights and Operations > FinOps. Create cost centers by team, project, or use case. Allocate monthly budgets and track actual spend against each center in real time.

Budget alerts

Set threshold-based alerts (e.g., "alert when 80% of budget consumed"). Alerts route through the severity-based notification system — critical budget breaches trigger Slack and email notifications.

Cost-per-action tracking

Break down LLM spend by action type: KoraSafe queries, risk assessments, guardian scans, code audits, and document ingestion. Identify which workflows consume the most tokens and optimize accordingly.
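
The breakdown described above amounts to grouping cost events by action type. A minimal sketch, assuming a hypothetical event shape with "action" and "cost_usd" fields:

```python
from collections import defaultdict

def spend_by_action(events: list[dict]) -> dict[str, float]:
    """Aggregate LLM cost per action type, highest-spend workflows first."""
    totals: dict[str, float] = defaultdict(float)
    for event in events:
        totals[event["action"]] += event["cost_usd"]
    return dict(sorted(totals.items(), key=lambda kv: kv[1], reverse=True))

events = [
    {"action": "risk_assessment", "cost_usd": 0.42},
    {"action": "guardian_scan", "cost_usd": 0.10},
    {"action": "risk_assessment", "cost_usd": 0.38},
]
print(spend_by_action(events))
```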

Reports

  • Usage forecast — Project future spend based on current trends to plan capacity.
  • Chargeback report — Allocate AI costs to business units for internal billing.
  • Value report — Quantify governance ROI: compliance gaps closed, findings remediated, audit hours saved.
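
The usage forecast can be approximated with a simple linear projection over recent monthly spend. This is a sketch of the idea; the platform's actual forecasting model is not specified here:

```python
def forecast_spend(monthly_spend: list[float], months_ahead: int = 3) -> list[float]:
    """Project future monthly spend from the average month-over-month change."""
    if len(monthly_spend) < 2:
        raise ValueError("need at least two months of history")
    deltas = [b - a for a, b in zip(monthly_spend, monthly_spend[1:])]
    trend = sum(deltas) / len(deltas)
    last = monthly_spend[-1]
    return [round(last + trend * (i + 1), 2) for i in range(months_ahead)]

print(forecast_spend([1000.0, 1100.0, 1250.0]))  # [1375.0, 1500.0, 1625.0]
```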

System health

Real-time platform monitoring and diagnostics.

Navigate to Insights and Operations > System Health. The dashboard shows:

  • Service probes — Every 5 minutes, health probes check the database, auth provider, LLM endpoints, and integrations. Results display as green (operational), amber (degraded), or red (down).
  • Error log — Searchable log with structured error codes, request IDs, and timestamps. Stack traces and internal paths are never exposed to API clients.
  • Endpoint health — Per-endpoint latency and error rate metrics. Spot degraded routes before they impact users.
  • Database health — Connection pool utilization, query performance, and table size monitoring.
  • SLA report — Track availability, response times, and governance finding resolution windows against your SLA targets.

Alerts and notifications

Severity-based routing with SLA tracking across Slack, email, and in-app channels.

How alert routing works

When a governance finding is detected, KoraSafe routes the alert based on severity:

Severity — Channels — Response SLA — Example
Critical — Slack DM + email + platform banner — 1 hour — PII in production response, CVE with active exploit
High — Slack channel + email digest — 24 hours — Missing HITL gate, high-risk EU AI Act classification
Medium — Platform notification + weekly digest — 1 week — Outdated dependency, missing error handling
Low — Platform notification only — Next sprint — Code style violation, missing model card field
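
Severity-based routing amounts to a lookup against this mapping. A minimal sketch; the channel identifiers here are illustrative names, not platform configuration keys:

```python
ROUTING = {
    # Mirrors the routing table: channels and response SLA per severity.
    "critical": {"channels": ["slack_dm", "email", "platform_banner"], "sla_hours": 1},
    "high": {"channels": ["slack_channel", "email_digest"], "sla_hours": 24},
    "medium": {"channels": ["platform", "weekly_digest"], "sla_hours": 168},
    "low": {"channels": ["platform"], "sla_hours": None},  # next sprint: no fixed window
}

def route_alert(severity: str) -> dict:
    """Return the channels and SLA window for a finding's severity."""
    try:
        return ROUTING[severity]
    except KeyError:
        raise ValueError(f"unknown severity: {severity}") from None

print(route_alert("high")["channels"])  # ['slack_channel', 'email_digest']
```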

Critical alert banner

When critical or high-severity alerts are active, a persistent banner appears at the top of the platform. The banner polls every 60 seconds and links directly to the audit findings dashboard. Banners can be dismissed per session but reappear until the underlying alert is resolved.

Notification preferences

Navigate to your profile or Administration > Notification Preferences. Toggle delivery channels (in-app, email, Slack) independently for each category:

  • Governance — Risk assessments, compliance gaps, policy violations, governance score changes
  • Agents — Pending approvals, trust tier changes, circuit breakers, error spikes
  • Security — Failed logins, new device access, role changes, API key expiry
  • Usage — Approaching limits, budget thresholds, overages, plan renewals
  • System — Service degradation, error rate spikes, SLA breaches
  • Team — New members, invitations accepted, role changes

Preferences are saved per user and per organization.

Alert rules engine

Admins can define custom alert rules with a metric, operator, threshold, cooldown period, and channel routing. Rules are evaluated every 5 minutes by the cron engine. Stale alerts (open for more than 24 hours with no recurrence) are auto-resolved.
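
One evaluation pass over such a rule can be sketched as below. The rule shape ("op", "threshold", "cooldown_minutes", "last_fired_at") is a hypothetical schema for illustration, not the platform's stored format:

```python
import operator
from datetime import datetime, timedelta

OPS = {">": operator.gt, ">=": operator.ge, "<": operator.lt,
       "<=": operator.le, "==": operator.eq}

def should_fire(rule: dict, metric_value: float, now: datetime) -> bool:
    """Evaluate one alert rule, suppressing re-fires inside the cooldown window."""
    last = rule.get("last_fired_at")
    if last is not None and now - last < timedelta(minutes=rule["cooldown_minutes"]):
        return False  # still cooling down from the previous firing
    return OPS[rule["op"]](metric_value, rule["threshold"])

rule = {"op": ">", "threshold": 5.0, "cooldown_minutes": 30, "last_fired_at": None}
print(should_fire(rule, 7.2, datetime(2025, 1, 1, 12, 0)))  # True: 7.2 > 5.0
```

In the platform this check runs against every rule on the 5-minute cron tick; a firing rule then hands off to the severity router above.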

SLA compliance

Every alert tracks first_detected_at, acknowledged_at, and resolved_at. The SLA compliance endpoint (/api/alerts/sla) returns breach rates per severity tier, helping teams identify bottlenecks in their remediation workflows.
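
The breach-rate computation reduces to comparing each alert's detection-to-resolution interval against its severity's SLA window. A sketch, assuming the SLA windows from the routing table in this chapter:

```python
from datetime import datetime, timedelta

# Assumed SLA windows per severity, per the routing table in this chapter.
SLA_WINDOWS = {"critical": timedelta(hours=1), "high": timedelta(hours=24),
               "medium": timedelta(weeks=1)}

def breach_rate(alerts: list[dict], severity: str) -> float:
    """Fraction of resolved alerts at this severity that missed their SLA window."""
    window = SLA_WINDOWS[severity]
    relevant = [a for a in alerts
                if a["severity"] == severity and a["resolved_at"] is not None]
    if not relevant:
        return 0.0
    breached = sum(1 for a in relevant
                   if a["resolved_at"] - a["first_detected_at"] > window)
    return breached / len(relevant)

t0 = datetime(2025, 1, 1)
alerts = [
    {"severity": "critical", "first_detected_at": t0, "resolved_at": t0 + timedelta(minutes=45)},
    {"severity": "critical", "first_detected_at": t0, "resolved_at": t0 + timedelta(hours=3)},
]
print(breach_rate(alerts, "critical"))  # 0.5: one of two resolved within 1 hour
```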

Reference

Glossary

Key terms used throughout KoraSafe and in AI governance regulation.

  • A2A Protocol — Agent-to-Agent communication standard. Allows AI agents to discover and interact with KoraSafe's governance capabilities programmatically.
  • Annex III — The EU AI Act annex listing eight categories of high-risk AI systems, including employment tools, biometric systems, and critical infrastructure.
  • Autonomy level — A four-tier classification (Observe, Advise, Supervised Action, Full Autonomy) that defines an AI system's decision-making independence and the governance controls required.
  • Circuit breaker — An enforcement policy that halts system operation when a threshold is breached, for example, stopping an agent when its budget cap is reached.
  • Compliance score — A 0 to 100 readiness score derived from a system's governance posture, risk tier, and active controls. Used to track progress and gate deployments.
  • DPIA — Data Protection Impact Assessment. Required under GDPR Art. 35 for processing that is "likely to result in a high risk" to individuals, including most AI decision-making systems.
  • EDD pipeline — Eval-Driven Development. A four-stage quality gate (Define, Develop, Gate, Monitor) that applies governance criteria before and after AI deployment.
  • EU AI Act — Regulation (EU) 2024/1689 on artificial intelligence. The world's first comprehensive AI regulation, creating risk-based obligations for AI systems placed on the EU market.
  • Governance heatmap — A visual matrix showing every AI system's status across every governance dimension. Red = not implemented, yellow = in progress, green = done.
  • Guardian agent — A specialized AI agent that continuously monitors your fleet for specific violation types: PII, bias, hallucination, cost overruns, autonomy violations, or compliance drift.
  • HITL — Human-in-the-Loop. A governance pattern requiring human review and approval for AI actions above a defined consequence threshold.
  • MCP — Model Context Protocol. An open standard for connecting AI agents to external data and tools. KoraSafe's MCP endpoint enables programmatic governance.
  • Maturity radar — A spider-chart visualization scoring governance maturity across seven dimensions on a five-level scale from Initial to Optimized.
  • NIST AI RMF — National Institute of Standards and Technology AI Risk Management Framework. A voluntary US framework for managing AI risks across four functions: Govern, Map, Measure, Manage.
  • PII Sentinel — KoraSafe's guardian that detects personally identifiable information in AI inputs and outputs in real time.
  • RACI matrix — Responsibility Assignment Matrix. Defines who is Responsible, Accountable, Consulted, and Informed for each governance control.
  • RAG — Retrieval-Augmented Generation. The technique KoraSafe uses to ground its answers in the platform's regulatory knowledge base rather than relying on model training alone.
  • Risk tier — The EU AI Act classification assigned to an AI system: Prohibited, High-Risk, Limited Risk, or Minimal Risk.
  • Six Pillars — KoraSafe's governance framework: Human Oversight, Logging and Audit, Bias Testing, Risk Management, Data Governance, and Transparency.
  • Trust score — A numeric metric assigned to agent interactions that reflects behavioral reliability. Used by the Autonomy Guard to restrict high-impact actions from low-trust agents.
  • Code audit — Automated static analysis of AI agent source code for governance violations. Findings map to regulatory controls across EU AI Act, GDPR, and HIPAA.
  • Policy pack — A versioned bundle of governance policies tied to a specific regulation. Packs use semver and support human review gates before enforcement.
  • Knowledge Graph — A structured map of regulations, articles, and controls that enables cross-regulation credit and unified compliance scoring.
  • Cross-regulation credit — When a single governance control satisfies overlapping requirements from multiple regulatory frameworks.
  • Shadow AI — Unauthorized or unregistered AI tool usage within an organization, detected by the browser extension.
  • SLA compliance — Tracking whether governance findings are acknowledged and resolved within the defined response windows per severity tier.
  • FinOps — The discipline of monitoring, allocating, and optimizing LLM API spend across an organization.
  • GRC connector — Integration module bridging KoraSafe with enterprise GRC systems like ServiceNow and OneTrust.