Find answers, learn the platform, and get the most out of KoraSafe's AI governance tools.
Create your account and configure your workspace in under two minutes. Get started →
Classify an AI system's risk level with our guided assessment flow. Learn how →
Add AI systems to your registry with autonomy levels and metadata. Explore registry →
Configure maturity assessments, RACI ownership, and compliance workflows. Set up →
The Kora Risk Assessment agent powers a multi-step analysis that classifies each AI system according to regulatory frameworks like the EU AI Act, NIST AI RMF, and ISO 42001. It synthesizes your answers, cross-references regulatory requirements, and delivers a cited risk classification in seconds.
How the assessment works: You answer a series of context-aware questions about your AI system's purpose, data inputs, autonomy level, deployment context, and affected population. The Kora Risk Assessment agent then runs a multi-step reasoning pipeline to produce a classification. The assessment typically takes 60 seconds or less to complete.
What questions are asked: Questions cover the system's use case domain (e.g., hiring, credit scoring, healthcare), whether it interacts with vulnerable populations, the type and sensitivity of data processed, the level of human oversight, and the deployment environment (internal vs. customer-facing).
Risk classification: Based on your answers, the system is classified into one of four risk tiers: Prohibited, High, Limited, or Minimal, following the tiered structure of frameworks like the EU AI Act.
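As a rough illustration of how answers can map to the four risk tiers, here is a toy decision sketch. The actual assessment runs a multi-step agent reasoning pipeline; the answer keys (`prohibited_practice`, `high_risk_domain`, `interacts_with_users`) are hypothetical names chosen for this example.

```python
def classify(answers):
    """Toy tier mapping, loosely modeled on EU AI Act categories.
    Illustrates tier ordering only; not KoraSafe's actual logic."""
    if answers.get("prohibited_practice"):
        return "Prohibited"   # banned use cases
    if answers.get("high_risk_domain"):
        return "High"         # e.g. hiring, credit scoring, healthcare
    if answers.get("interacts_with_users"):
        return "Limited"      # transparency obligations apply
    return "Minimal"          # baseline governance only
```

Note that the tiers are checked strictest-first, so a system in a prohibited use case is never downgraded by later answers.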
Reports: After classification, KoraSafe generates a detailed compliance report including the risk tier, rationale, applicable regulations, required actions, and a remediation roadmap. Reports can be exported as PDF for auditors and board review.
The AI Registry is your single source of truth for every AI system in your organization. It provides a centralized catalog with lifecycle tracking, risk metadata, and ownership assignment.
How to register AI assets: Click "Add AI System" from the Registry dashboard. Enter the system name, description, owning team, vendor (if third-party), deployment status, and data sources. You can also import systems in bulk via CSV or the MCP API.
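For bulk CSV import, a minimal parsing sketch is shown below. The column names (`name`, `description`, `owning_team`, `vendor`, `status`) are assumptions for illustration; check the import template in your workspace for the actual expected headers.

```python
import csv
import io

def rows_from_csv(text):
    """Parse a bulk-import CSV into registry entries.
    Column names here are illustrative, not the official template."""
    return [
        {
            "name": row["name"],
            "description": row["description"],
            "owning_team": row["owning_team"],
            "vendor": row.get("vendor") or None,  # blank for in-house systems
            "status": row["status"],
        }
        for row in csv.DictReader(io.StringIO(text))
    ]
```

Each resulting dictionary corresponds to one "Add AI System" form submission, so the same validation rules apply to both paths.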
Autonomy levels explained: Each registered system is tagged with an autonomy level that reflects how independently it operates, on a five-level scale: Level 0 (Tool) is fully human-controlled; Level 1 (Assistant) recommends while humans decide; Level 2 (Collaborator) makes decisions with human approval gates; Level 3 (Delegated) operates autonomously within defined guardrails, escalating exceptions; and Level 4 (Autonomous) operates fully independently with real-time monitoring and circuit breakers.
Fleet view: The fleet view provides a bird's-eye visualization of all registered AI systems, filterable by risk tier, autonomy level, department, status, and regulatory framework. Color-coded cards make it easy to spot systems that need attention.
Detail tabs: Each system's detail page has tabs for Overview (metadata and status), Risk Assessment (classification results), Governance (ownership RACI, maturity scores), Policies (active enforcement rules), Activity (audit trail of changes), and Compliance (checklist status and report history).
KoraSafe's Governance module gives you full visibility into your organization's AI governance maturity across six core pillars: Fairness, Transparency, Safety, Privacy, Accountability, and Robustness.
Maturity radar: A spider/radar chart that visualizes your governance maturity scores across all six pillars. Each pillar is scored from 0 to 5 based on the controls, processes, and documentation you have in place. The radar helps identify which areas are strong and where investment is needed.
Heatmap: The governance heatmap displays a cross-reference of AI systems against governance pillars, color-coded by maturity. Red cells indicate critical gaps requiring immediate action, yellow indicates partial compliance, and green represents full maturity. This makes it easy to spot systemic weaknesses across your fleet.
RACI matrix: Define who is Responsible, Accountable, Consulted, and Informed for each governance activity. KoraSafe generates a configurable RACI matrix for every registered AI system, ensuring clear ownership of risk management, monitoring, incident response, and compliance reporting.
Compliance checklists: Pre-built and customizable checklists aligned to the EU AI Act, NIST AI RMF, ISO 42001, and other frameworks. Each checklist item can be assigned to an owner, given a due date, and tracked to completion. Status is automatically reflected in your governance maturity scores.
Agent evals: For agentic AI systems, KoraSafe provides Enhanced Due Diligence (EDD) evaluations with multi-dimensional weighted scoring across autonomy, data sensitivity, decision impact, reversibility, and human oversight. Evals generate quantified risk profiles that drive enforcement policy recommendations.
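The weighted-scoring idea behind EDD evals can be sketched as follows. The weights and the 0-100 normalization are illustrative assumptions, not KoraSafe's actual scoring model; only the five dimensions come from the product description.

```python
# Illustrative weights; KoraSafe's actual model may differ.
WEIGHTS = {
    "autonomy": 0.30,
    "data_sensitivity": 0.25,
    "decision_impact": 0.25,
    "reversibility": 0.10,
    "human_oversight": 0.10,
}

def edd_score(dimensions):
    """Weighted sum over 0-5 dimension ratings, normalized to 0-100."""
    raw = sum(WEIGHTS[k] * v for k, v in dimensions.items())
    return round(raw / 5 * 100, 1)
```

A system rated 5 on every dimension scores 100; strong human oversight and easy reversibility pull the quantified risk profile down even for highly autonomous systems.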
The Enforcement module turns governance policies into active controls powered by Kora's Guardian agents. Kora PII Sentinel, Kora Bias Watchdog, Kora Hallucination Detector, Kora Cost Monitor, and Kora Autonomy Governor each run continuously alongside your AI systems in real time.
Policy types: Enforcement policies map to the risk categories the Guardian agents monitor: PII exposure, bias, hallucinations, cost and usage limits, and autonomy boundaries. Each policy defines a threshold and a response action, from logging and alerting through flagging an output for review to blocking it entirely.
Violation management: Every policy violation is logged with full context: timestamp, system, policy triggered, input/output data, severity, and resolution status. The violations dashboard supports filtering, bulk actions, and trend analysis. Repeated violations automatically increase a system's risk score.
Guardian Agents: Specialized Kora agents that continuously monitor your AI fleet for specific risk categories. KoraSafe ships with five built-in Guardians: Kora PII Sentinel (detects and prevents personal data exposure), Kora Bias Watchdog (monitors for discriminatory patterns), Kora Hallucination Detector (catches factual inaccuracies), Kora Cost Monitor (prevents runaway API spending), and Kora Autonomy Governor (enforces human oversight boundaries). Each Guardian can be configured with custom thresholds and response actions.
Kora routes your questions to the right specialist agent and brings back one clear answer.
How to use Kora: Click the Kora icon in the bottom-right corner of any page, or press Ctrl+K (Cmd+K on Mac) to open the assistant. Type your question in plain English and Kora will respond with relevant guidance, links to documentation, and actionable next steps.
What Kora can answer: How to use KoraSafe features, how regulations like the EU AI Act and NIST AI RMF apply to your systems, and questions about your organization's registered AI systems and governance data.
Agent network scope: Kora's agents draw on the full text of major AI regulations (EU AI Act, NIST AI RMF, ISO 42001, OECD AI Principles), KoraSafe's product documentation, your organization's registered AI systems and governance data, and industry best practices for responsible AI deployment. Kora's agents do not have access to your production AI system data or end-user interactions.
KoraSafe is built for enterprise security from the ground up, with fine-grained access controls and complete audit trails.
SSO/MFA setup: KoraSafe supports SAML 2.0 and OIDC-based Single Sign-On with providers like Okta, Azure AD, Google Workspace, and OneLogin. Navigate to Settings > Authentication to configure your identity provider. Multi-factor authentication can be enforced organization-wide or per role, supporting authenticator apps, SMS, and hardware security keys.
User management: Invite users by email or sync from your identity provider. Assign roles (Admin, Governance Lead, Analyst, Viewer) that map to granular permissions. Role-based access control (RBAC) ensures users only see systems and data relevant to their department and responsibilities.
API keys: Generate scoped API keys from Settings > API to integrate KoraSafe with your CI/CD pipelines, internal tools, or agent frameworks. Each key can be restricted by permission scope (read, write, admin), IP allowlist, and expiration date. All API activity is logged.
Audit logs: Every action in KoraSafe is recorded in an immutable audit log: user logins, configuration changes, assessment completions, policy modifications, and data exports. Logs are searchable by user, action type, resource, and date range. They can be exported to your SIEM or compliance archive.
Multi-tenant isolation: Each organization's data is logically isolated with dedicated encryption keys. Tenant boundaries are enforced at the database, API, and application layers. Cross-tenant data access is architecturally impossible. KoraSafe supports sub-tenants for enterprise customers with multiple business units that need independent governance workflows while maintaining a unified executive view.
KoraSafe governs your AI fleet. Risk assessment, policy enforcement, regulatory tracking, all in one place.
The risk assessment asks a series of targeted questions about your AI system's purpose, data inputs, deployment context, affected populations, and autonomy level. Based on your answers, it applies regulatory mapping logic aligned with the EU AI Act, NIST AI RMF, and other frameworks to classify the system into a risk tier (Prohibited, High, Limited, or Minimal). The entire process takes about 60 seconds and produces a downloadable compliance report with specific recommended actions and a remediation timeline.
KoraSafe currently supports compliance mapping for the EU AI Act, NIST AI Risk Management Framework (AI RMF), ISO/IEC 42001, OECD AI Principles, the White House Executive Order on AI, and sector-specific regulations including those from the FCA, OCC, FDA, and CMS. The Regulatory Intelligence module continuously monitors global AI regulation developments and updates compliance checklists as new requirements emerge. You can also create custom compliance frameworks for internal policies.
Guardian Agents are specialized AI monitors that run continuously alongside your AI systems. Each Guardian focuses on a specific risk category: Kora PII Sentinel detects personal data in inputs and outputs, Kora Bias Watchdog monitors for discriminatory patterns across demographic groups, Kora Hallucination Detector validates factual accuracy against trusted knowledge bases, Kora Cost Monitor tracks API usage and spending against budgets, and Kora Autonomy Governor enforces human oversight boundaries. When a Guardian detects a violation, it can log it, alert the responsible team, flag the output for review, or block the response entirely, depending on your configuration. Guardians learn from your feedback to reduce false positives over time.
Yes. KoraSafe enforces enterprise-grade security at every layer. All data is encrypted at rest (AES-256) and in transit (TLS 1.3). Each organization is logically isolated with dedicated encryption keys, and cross-tenant access is architecturally impossible. KoraSafe supports SSO via SAML 2.0/OIDC, enforces MFA, and provides role-based access control with granular permissions. Every action is logged in an immutable audit trail. KoraSafe does not use your data to train AI models, and you can configure data residency to meet GDPR and other regional requirements.
The MCP (Model Context Protocol) API enables agent-to-agent governance. It allows your AI agents to programmatically check policies, request approvals, report actions, and receive guardrails from KoraSafe in real time. For example, an autonomous agent can call the MCP API before executing a high-stakes action to verify it complies with your governance policies. If the action is flagged, the agent receives instructions to escalate to a human or modify its approach. The MCP API is RESTful, supports webhooks for event-driven workflows, and includes SDKs for Python, TypeScript, and Go.
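A pre-action policy check might look like the sketch below. The endpoint URL, payload fields, and response shape are all assumptions for illustration; the official Python SDK likely wraps this call, so consult the API reference before relying on these names.

```python
import json
from urllib import request

MCP_BASE = "https://api.korasafe.example/mcp/v1"  # hypothetical endpoint

def build_action_check(agent_id, action, impact):
    """Payload an agent submits before a high-stakes action.
    Field names are illustrative, not the documented schema."""
    return {
        "agent_id": agent_id,
        "action": action,
        "impact": impact,  # e.g. "irreversible", "financial"
    }

def check_policy(payload, api_key):
    """POST the check and return the decision, e.g. allow/escalate/block."""
    req = request.Request(
        f"{MCP_BASE}/policy-checks",
        data=json.dumps(payload).encode(),
        headers={
            "Content-Type": "application/json",
            "Authorization": f"Bearer {api_key}",
        },
    )
    with request.urlopen(req) as resp:
        return json.load(resp)["decision"]
```

The agent branches on the returned decision: proceed on allow, hand off to a human on escalate, and abandon or revise the action on block.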
Compliance reports can be exported from several locations in the platform. After completing a risk assessment, click "Export Report" to download a PDF. From the Governance dashboard, use the "Export" button to generate a full governance maturity report across all registered systems. Individual system detail pages also have export options for system-specific compliance documentation. All reports are formatted for board-level presentation and regulatory submission, including executive summaries, risk classifications, control mappings, and action items with owners and deadlines.
KoraSafe classifies AI agents on a five-level autonomy scale that determines the governance controls required. Level 0 (Tool) systems are fully human-controlled. Level 1 (Assistant) systems provide recommendations but humans decide. Level 2 (Collaborator) systems make decisions with human approval gates. Level 3 (Delegated) systems operate autonomously within defined guardrails, escalating exceptions. Level 4 (Autonomous) systems operate fully independently with real-time monitoring and circuit breakers. Higher autonomy levels automatically trigger stricter governance requirements, more frequent evaluations, and more sensitive enforcement policies.
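The five-level scale above maps naturally onto an ordered enum, which is handy when writing your own integration logic against the registry. The enum mirrors the documented levels; the `requires_approval_gate` rule is an illustrative example, not product behavior.

```python
from enum import IntEnum

class Autonomy(IntEnum):
    TOOL = 0          # fully human-controlled
    ASSISTANT = 1     # recommends; humans decide
    COLLABORATOR = 2  # decides with human approval gates
    DELEGATED = 3     # autonomous within guardrails, escalates exceptions
    AUTONOMOUS = 4    # fully independent; monitoring and circuit breakers

def requires_approval_gate(level):
    """Illustrative rule: Level 2 and above involve the system making
    decisions, so some human gate or guardrail applies."""
    return level >= Autonomy.COLLABORATOR
```

Because the levels are ordered integers, "higher autonomy triggers stricter governance" becomes a simple comparison in code.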
Each organization in KoraSafe operates within a fully isolated tenant boundary. Isolation is enforced at three layers: the database layer (separate schemas with dedicated encryption keys), the API layer (tenant-scoped authentication tokens), and the application layer (tenant context verified on every request). No query or API call can ever access data belonging to another tenant. For enterprise customers with multiple business units, KoraSafe supports sub-tenants that maintain independent governance workflows and data isolation while providing a unified executive dashboard for organization-wide visibility.
Yes. KoraSafe is designed to fit into your existing stack, not replace it. The REST API and MCP API support programmatic access to all platform capabilities. Webhooks can push events (new violations, assessment completions, policy changes) to any HTTP endpoint. Built-in integrations include Slack and Microsoft Teams for approval workflows and alerts, SIEM platforms (Splunk, Datadog) for audit log forwarding, CI/CD pipelines (GitHub Actions, GitLab CI) for pre-deployment risk checks, and identity providers (Okta, Azure AD) for SSO. Custom integrations can be built using the API SDKs available in Python, TypeScript, and Go.
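Webhook endpoints should verify that events really came from the platform. Most webhook systems sign the request body with a shared secret; the sketch below assumes an HMAC-SHA256 hex signature, which is a common convention. KoraSafe's actual signing scheme and header name should be confirmed in your webhook settings.

```python
import hashlib
import hmac

def verify_webhook(secret, body, signature):
    """Constant-time check of an assumed HMAC-SHA256 hex signature
    over the raw request body."""
    expected = hmac.new(secret.encode(), body, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, signature)
```

Using `hmac.compare_digest` instead of `==` avoids timing side channels when comparing signatures.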
Our support team is here to help you get the most out of KoraSafe. Reach out and we will get back to you within one business day.