KvantumCI Documentation

A complete guide to using the KvantumCI platform

Dashboards

KvantumCI includes a native dashboard solution for visualizing your data in real time. The dashboard shows the total number of projects, the number of integrated repositories, the overall security score (computed by our internal weighted scoring system built on the rule engine), and overall statistics about findings.

You can switch between prepared dashboards in the upper menu. The dashboards cover overall DevSecOps security posture, pure security, Site Reliability topics, and CI/CD issues that must be resolved. The Performance dashboard focuses on the performance and duration of validation runs and scans.

DevSecOps Dashboard Overview

The DevSecOps Dashboard provides a consolidated, real-time view of your organization's project health, security posture, operational trends, and CI/CD verification activity. It is designed to give engineering, SRE, DevOps, and security teams a fast and actionable overview of system-wide risks and maturity.

Top-Level Summary Cards

License Edition: Displays the currently active product edition (e.g., Free, Team, Enterprise). This indicates available features and limits (scans, projects, and repositories).

Total Projects: Shows the number of projects that are currently onboarded into the platform. These projects contribute to pulse scores, scanning metrics, and maturity insights.

Total Repositories: Indicates how many connected Git repositories are being monitored and scanned.

Verification Runs: Shows the total number of automated verification runs (scans) executed across all projects. This includes SAST, SCA, IaC, pipeline checks, and other rule engine validations.

Project Pulse

The Project Pulse widget acts as your primary health indicator.

  • Pulse Score (Center Number): A normalized score summarizing the overall condition of your environment based on findings, severity, and trends.
  • Project Overview: Total monitored projects and number of projects currently at risk.
  • Trend Indicator: Highlights whether the security posture is improving, stable, or degrading.
  • Findings Δ: Percentage change in findings since the previous period.
  • Scans/Week: Shows scanning frequency to visualize CI/CD activity and coverage.

Results Status Distribution

A donut chart showing the distribution of scan outcomes:

  • Pass – No issues detected
  • Fail – One or more rules violated
  • Skip – Rule or test was not applicable

Severity Distribution

Shows how findings are distributed across severity levels: Critical, High, Medium, Low. This allows teams to prioritize remediation efforts based on risk impact.

Category Maturity Model

A radar chart visualizing your maturity across key DevSecOps categories: Code Security, Dependencies, Infrastructure as Code, Containers, Deployment, Release Management, Lack of CI/CD, SRE, Incorrect Configuration, and Security Problems. Each axis represents the relative capability or coverage in that category.

Verification Runs (Monthly)

A monthly time-series chart displaying total verification runs over time. This view helps you understand adoption trends, pipeline activity patterns, seasonal changes in scanning, and month-over-month improvements or regressions.

Tenant Administration

Manage your organization's settings, users, and permissions from the Tenant Administration panel. Configure team access levels, manage integrations, and control platform-wide settings.

Plan and Subscription

This section provides information about your current plan and subscription period. You can choose between a 30-day subscription and discounted yearly plans.

Billing

This part of the platform is designed to manage your subscription and billing information. In Tenant Administration, the owner can view billing history and past invoices.

Note: The Billing section is available only to the Tenant owner (Administrator role).

Integrations

KvantumCI supports multiple integrations, including multiple instances of the same platform type. Each integration uses a Named Configuration - a unique identifier with a user-friendly name that makes it easy to manage different tokens and connections.

What is Named Configuration?

A Named Configuration is an integration setup with a unique UUID and a user-assigned name. This allows you to easily manage multiple integrations (e.g., different GitLab instances or separate API tokens for different teams).

To add a new configuration, click on Add Integration, select the type from the list, and give it a descriptive name.
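
Conceptually, a Named Configuration pairs a platform-generated UUID with your chosen display name and the credentials for the target system. A minimal sketch of what such a record holds (the field names below are illustrative, not the platform's actual schema):

```python
import uuid

# Illustrative shape of a Named Configuration record.
# Field names are assumptions for explanation, not the real schema.
gitlab_team_a = {
    "id": str(uuid.uuid4()),          # unique identifier generated by the platform
    "name": "GitLab - Team A",        # user-friendly display name
    "type": "gitlab",                 # integration type selected from the list
    "base_url": "https://gitlab.example.com",  # custom domain, if any
    "api_token": "<token>",           # credential scoped to this configuration
}
```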

Supported Integrations

Available Now: GitHub, GitLab

Coming Soon: Jenkins, Azure Repos, AWS, JFrog, Nexus

Integration Details

GitLab Integration

We support both standalone (Community) and Enterprise connectors to GitLab. To set up a new integration, select Add New Integration → GitLab. Name the configuration and add your API token. If you have a custom domain, change the Base URL; if you are using SaaS-based GitLab, leave this field unchanged. Then confirm with Create.

For GitLab you must have at least the Reporter role. Find more details here: https://docs.gitlab.com/user/permissions/
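
Before creating the configuration, you can sanity-check that a token meets the Reporter requirement by querying the GitLab REST API directly. A minimal sketch, assuming the requests library; the Base URL and token are placeholders (Reporter corresponds to access level 20 in GitLab):

```python
import requests

BASE_URL = "https://gitlab.example.com"  # placeholder; use https://gitlab.com for SaaS
TOKEN = "<your-api-token>"               # placeholder

# List projects where the token has at least the Reporter role (access level 20).
resp = requests.get(
    f"{BASE_URL}/api/v4/projects",
    headers={"PRIVATE-TOKEN": TOKEN},
    params={"min_access_level": 20, "per_page": 5},
    timeout=10,
)
resp.raise_for_status()
for project in resp.json():
    print(project["path_with_namespace"])
```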

GitHub Integration

We support GitHub as a primary platform for validation runs. To set up a new integration, select Add New Integration → GitHub. Name the configuration and add your API token. If you have a custom domain, change the Base URL. Once the integration is valid, you should be able to list branches and repositories from it. Then confirm with Create.

For GitHub you need:

  • Read access to actions, artifact metadata, attestations API, code, commit statuses, metadata, and repository hooks
  • Read access to workflows
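
As with GitLab, you can sanity-check a token's read access by calling the GitHub REST API before creating the configuration. A minimal sketch; the owner, repository, and token are placeholders:

```python
import requests

TOKEN = "<your-api-token>"         # placeholder
OWNER, REPO = "my-org", "my-repo"  # placeholders

headers = {
    "Authorization": f"Bearer {TOKEN}",
    "Accept": "application/vnd.github+json",
}

# Confirm the token is valid and see which account it authenticates as.
user = requests.get("https://api.github.com/user", headers=headers, timeout=10)
user.raise_for_status()
print(user.json()["login"])

# Confirm read access to the repository by listing its branches.
branches = requests.get(
    f"https://api.github.com/repos/{OWNER}/{REPO}/branches",
    headers=headers,
    timeout=10,
)
branches.raise_for_status()
print([b["name"] for b in branches.json()])
```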

Projects

Projects are the main management module of KvantumCI. You can sort projects by score or name, and you can easily switch between the tile and list views.

Creating a New Project

To add a new project, select New Project. Optionally choose a project icon that describes your project well, and define its name. Use names that are descriptive within your project organization and avoid special characters to reduce confusion. Then select Create project.

Project Management

Selecting an existing project takes you to the project page, which allows you to run scans (validation runs). You can add new integrations (new items for scanning) via Add Repo.

Project Items (Repositories)

Project Items represent individual integrations that are attached to a project. A single project may consist of multiple components, codebases, services, or branches, and each of these is stored as a separate Project Item. This enables granular monitoring and evaluation of complex systems, especially when multiple repositories or subprojects contribute to the same application or service.

Each Project Item is evaluated independently by the rule engine during a Validation Run. Results are aggregated at the project level, allowing users to observe both high-level posture and detailed, per-component findings.

Purpose

Project Items (Repositories) are used to:

  • Connect one or more code repositories to a project.
  • Monitor separate subprojects or services within a larger system.
  • Track different branches of the same repository when needed.
  • Apply rule engine evaluation on a per-integration basis.
  • Maintain isolated scoring, findings, and trends for each integration.

This structure supports monorepos, multi-service architectures, polyglot environments, and multi-branch CI/CD workflows.

Attributes

The projects_repository table defines the storage model for project items. The table contains the following fields:

Identification

| Column | Type | Description |
| --- | --- | --- |
| id | uuid | Unique identifier for the repository entry. |
| tenant_id | uuid | Tenant to which the item belongs. |
| project_id | uuid | Project to which the item is linked. |
| author_id | uuid | User who created or registered the integration. |

Integration Metadata

| Column | Type | Description |
| --- | --- | --- |
| tenant_integrations_id | uuid | Reference to an integration configuration (e.g., GitHub, GitLab, local). |
| name | text | Display name of the repository or integration. |
| repository_url | text | URL of the repository, if applicable. |
| branch_name | varchar(255) | Branch associated with the integration. Defaults to main. |

Scoring and Lifecycle

| Column | Type | Description |
| --- | --- | --- |
| score | integer | Latest computed score for this integration, based on the most recent Validation Run. |
| created_at | timestamp | Timestamp of record creation. |
| updated_at | timestamp | Timestamp of the last modification. |
| deleted_at | timestamp | Soft-delete marker for lifecycle management. |
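
For orientation, the same storage model can be expressed in code. The sketch below loosely mirrors the documented columns; it is not the platform's actual ORM model:

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone
from typing import Optional
from uuid import UUID

def _now() -> datetime:
    return datetime.now(timezone.utc)

@dataclass
class ProjectRepository:
    """Loose mirror of the projects_repository columns documented above."""
    id: UUID
    tenant_id: UUID
    project_id: UUID
    author_id: UUID
    tenant_integrations_id: UUID
    name: str
    repository_url: Optional[str]           # URL of the repository, if applicable
    branch_name: str = "main"               # defaults to main
    score: Optional[int] = None             # latest score from the most recent Validation Run
    created_at: datetime = field(default_factory=_now)
    updated_at: datetime = field(default_factory=_now)
    deleted_at: Optional[datetime] = None   # soft-delete marker
```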

Relationship to Projects

A project may include multiple Project Items. This structure enables:

  • Monitoring multiple repositories under one application.
  • Tracking separate packages, modules, services, or infrastructure components as distinct units.
  • Evaluating feature branches independently when required.
  • Aggregating results at the project level for dashboards and scoring models.

Each Project Item retains its own scan history, findings, and scoring data, allowing users to identify which components contribute positively or negatively to the overall project posture.

Add Repository

Add Repo opens the Add Repository window, where you can select your named integration and the branch you want to scan. You can add the same branch multiple times, or add several different branches of the same repository.

To remove a project item (integration/repository), click Delete on the project item's row. The data is soft-deleted after 30 days.

Run Scan

Run Scan starts a validation run against your target integration using our fast testing and signal-detection engine. The scan should take only a few seconds before you can view the results for your repository.

Results

Results are sorted by severity: low, medium, high, and critical. Each finding shows its name and the technology it relates to. Categories are default labels that tell you whether the problem is an SRE, DevOps, Security, Code quality, or other type of issue. There are two types of remediations: Improvement and Must Have. A Must Have is an action item we recommend fixing as soon as possible. An Improvement is something you should put on your roadmap; it is still critical for implementing proper DevSecOps.

Verification Run

A Verification run represents a single execution of the rule evaluation process for a specific project or repository. Each run evaluates the selected project integration against the active rules defined in the rule engine, producing findings, metrics, and scoring data. Verification runs serve as the fundamental unit of scanning activity within the platform.

Verification runs can be executed through CI/CD integrations, the user interface, or the API, depending on implementation and organizational workflow. Each run is stored in the system for auditing, reporting, and trend analysis.
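
For teams that trigger runs from CI/CD, an API call might look like the sketch below. The endpoint path, payload fields, and authentication header are hypothetical placeholders for illustration; consult your tenant's API reference for the actual contract:

```python
import requests

# Hypothetical endpoint and payload, shown for illustration only.
API_BASE = "https://kvantumci.example.com/api"  # placeholder
API_KEY = "<api-key>"                           # placeholder

resp = requests.post(
    f"{API_BASE}/verifications",                # hypothetical path
    headers={"Authorization": f"Bearer {API_KEY}"},
    json={
        "project_id": "<project-uuid>",         # project being evaluated
        "project_repository_id": "<item-uuid>", # specific repository (project item)
        "type": "ci",                           # e.g., scheduled, manual, CI-triggered
    },
    timeout=30,
)
resp.raise_for_status()
print(resp.json())  # typically the run id and its initial status ("pending")
```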

Purpose

A Verification run provides an isolated instance of assessment for:

  • Validating a single project integration at a specific point in time.
  • Applying all relevant rules based on project configuration, source type, and severity weighting.
  • Generating findings, scoring results, and metrics used across dashboards, reports, and analytics.
  • Tracking scan activity for compliance, governance, and historical trend analysis.

Each Verification run functions independently, ensuring consistent and repeatable evaluation regardless of the triggering source.

Data Model

The verifications table tracks all Verification runs. Below is an overview of the key fields and their purpose.

Primary Fields

| Column | Type | Description |
| --- | --- | --- |
| id | uuid | Unique identifier for the Verification run. |
| tenant_id | uuid | The tenant to which the run belongs. Ensures multi-tenant isolation. |
| project_id | uuid | The project being evaluated. |
| project_repository_id | uuid | The specific repository associated with the run. |

Execution Metadata

| Column | Type | Description |
| --- | --- | --- |
| type | verification_type | Identifier describing the type of verification (e.g., scheduled, manual, CI-triggered). |
| triggered_by | uuid | The user or system account that initiated the run. |
| status | verification_status | Current state of the run (e.g., pending, running, completed, failed). |
| started_at | timestamp | Timestamp when processing began. |
| finished_at | timestamp | Timestamp when processing completed. |
| scan_duration | integer | Total duration of the scan in seconds. |

Additional Data

| Column | Type | Description |
| --- | --- | --- |
| metadata | jsonb | Stores contextual information, such as environment data, integration details, or rule engine output. |
| created_at | timestamp | Timestamp when the record was created. |
| updated_at | timestamp | Timestamp of the last update to the record. |
| deleted_at | timestamp | Marks soft-deleted records for lifecycle management. |

Execution Flow

A Verification run follows a structured lifecycle:

  1. Initialization: The run is created with status pending. Metadata such as triggering user, project reference, and scan type is recorded.
  2. Execution: The rule engine evaluates the project or repository. All active rules applicable to the project's source type are executed.
  3. Data Collection: Results are captured, including rule outcomes, findings, scoring values, and any supporting metadata.
  4. Completion: The run status changes to completed or failed. Duration is measured and stored.
  5. Availability for Analysis: Completed runs feed into dashboards, reports, history views, scoring models, and project-level insights.
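
The lifecycle above maps directly onto the verifications fields. The following sketch illustrates the state transitions and the scan_duration calculation; the rule_engine interface is an assumption for illustration, not the engine's real implementation:

```python
from datetime import datetime, timezone

def run_verification(verification: dict, rule_engine) -> dict:
    """Illustrative lifecycle: pending -> running -> completed/failed."""
    verification["status"] = "running"
    verification["started_at"] = datetime.now(timezone.utc)
    try:
        # The rule engine evaluates all active rules for the project's source type.
        findings = rule_engine.evaluate(verification["project_repository_id"])
        verification["metadata"] = {"findings": findings}
        verification["status"] = "completed"
    except Exception:
        verification["status"] = "failed"
    finally:
        verification["finished_at"] = datetime.now(timezone.utc)
        # scan_duration is stored as a total number of seconds.
        elapsed = verification["finished_at"] - verification["started_at"]
        verification["scan_duration"] = int(elapsed.total_seconds())
    return verification
```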

Relation to Project Scoring

Each Verification run contributes to the project's security posture and overall activity trends. The rule engine assigns scores based on:

  • Rule outcomes
  • Rule weight
  • Category-level classifications
  • Aggregated scoring logic

The final score reflects the outcome of the latest successful Verification run and is used throughout the system.

Usage Scenarios

Verification runs support several operational workflows:

  • Validation through CI/CD pipelines
  • Scheduled periodic posture checks
  • Manual rescans initiated by engineers or security teams
  • Post-remediation verification to ensure fixes are applied
  • Historical analysis for compliance and audit purposes

Summary: A Verification run represents the atomic execution unit of the platform's rule evaluation system. It provides consistent, traceable scanning results that form the foundation for scoring, reporting, and monitoring DevSecOps posture across projects and repositories.

Findings

Purpose

The Finding Details Page displays the full context, evidence, and recommendations for a specific result identified during a project or repository scan. It serves as the central view for triaging, remediating, and documenting security, compliance, and DevOps-related issues.

Navigation

Accessed via: Project → Repository → Results → Finding

Each finding is a standalone page with contextual metadata, analysis, and automated fix suggestions.

Layout Overview

| Section | Description |
| --- | --- |
| Header Breadcrumbs | Displays the current navigation path (Project → Repository → Result). Helps users quickly locate the finding's origin. |
| Criticality Card | Shows the severity level of the finding (e.g., Critical, High, Medium, Low). Severity is determined by the platform's risk scoring model. |
| Status Card | Displays the current resolution status of the finding (Active, In Progress, Resolved). Users can change the status if authorized. |
| Categories | Tags used to group findings (e.g., Lack CI/CD, Misconfiguration, Secret Exposure). Supports filtering and search. |
| Technologies | Lists technologies or platforms related to the finding (e.g., GitHub, GitLab, AWS, Terraform). |
| Description Section | Provides a short summary of the issue and its potential impact. |
| Recommendation Section | Offers detailed remediation guidance. Recommendations are automatically generated by rules or AI modules. |
| Risk Section | Describes the possible consequences if the issue remains unresolved. |
| Evidence Section | Displays proof collected by the scanning engine (e.g., missing files, misconfigurations, or code snippets). |
| References Section | Provides relevant documentation links, best practice resources, and vendor guidance. |
| AI Recommended Fix Button | Opens a contextual assistant that provides auto-generated example fixes, which can include code snippets (e.g., missing configuration files), CI/CD workflow examples, and container or infrastructure configuration fixes. |

Finding Statuses

Findings can have the following statuses:

  • Fail - A rule can produce multiple distinct levels of failure; rules are dynamic, so failure outcomes are deterministic but vary with the evaluated context.
  • Skip - Skipped because of missing data or irrelevant technology.
  • Pass - The expected signals and data were found in your DevOps tooling, code, or cloud.

Findings Explorer

Purpose

The Findings Explorer provides a visual and interactive way to analyze scan results across projects, repositories, and pipelines. It allows users to trace findings back to their source, understand their context, and access detailed remediation guidance directly within the graph view.

Page Overview

The page consists of three main sections:

  • Filters Panel (left sidebar)
  • Graph View (Findings Map) (center canvas)
  • Finding Details Panel (right sidebar)

1. Filters Panel

Located on the left side, the Filters Panel allows users to refine visible findings based on severity and status.

| Filter | Description |
| --- | --- |
| Severity | Filters findings by their criticality level: Critical, High, Medium, Low. |
| Status | Filters findings by their result status: Pass (the rule met its passing criteria in the validation run), Skip (skipped for several reasons), Fail (failing checks indicate missing CI/CD tooling, jobs, integrations, or practices in the DevSecOps area). |

Filtering updates the graph view in real time, helping users focus on the most relevant or high-priority findings.

2. Graph View (Findings Map)

The central canvas visualizes project structure, pipeline runs, and associated findings in an interactive graph format.

| Element | Description |
| --- | --- |
| Project Node | Represents a single project or repository being analyzed. |
| Integration Node | Shows the connected CI/CD platform (e.g., GitLab, GitHub). |
| Pipeline Node | Displays specific pipeline runs or workflow executions. Duration is shown in seconds. |
| Finding Nodes | Represent individual scan results or rule triggers. Color-coded by severity and shaped by category. |

Interactions:

  • Click a node to expand or collapse related items.
  • Select a finding node to open the Finding Details panel on the right.
  • Use mouse drag and scroll to pan and zoom the graph.
  • Click "Center" (bottom control) to re-focus the graph view.

This view helps visualize dependencies and understand how findings relate to pipelines, integrations, and projects.

3. Finding Details Panel

The right-hand panel displays in-depth information for the selected finding. It provides context, impact, and guidance for remediation.

| Section | Description |
| --- | --- |
| Criticality & Status | Displays severity and current resolution state of the finding. |
| Finding ID | A unique identifier assigned to the finding for traceability. |
| Title | Short descriptive name of the detected issue (e.g., Missing .dockerignore Files). |
| Description | Explains the nature of the issue, how it was detected, and its potential impact. |
| Recommendation | Lists actionable remediation steps and best practices. |
| Evidence (if available) | Displays collected data or files that triggered the finding. |
| References | Provides external resources and documentation links for further reading. |

Rule Engine

Overview

The Rule Engine is the core of KvantumCI. It offers versioned rules, and every rule has its own metadata. Every rule has an assigned weight from 0.0 to 1.0 (displayed as 0% to 100%), which affects the score calculation. By default all rules are enabled, so users do not need to change them.

To disable a rule, set the weight (bias) value to zero. Each rule's influence on the score changes based on its weight setting.

Each rule is defined by a unique identifier, version number, and associated severity level. The source type specifies the integration or platform that the rule supports. Rules can be either generic, applying across multiple integrations, or platform-specific, designed to address features unique to a particular environment (for example, GitLab-specific configuration rules).

Rule Sets

Different modules add new rule sets. You can search rules across rule sets by keyword. Every rule has several assigned permutations of findings; these can be different types of fail, skip, or pass. Rule sets act as containers that organize the general evaluation areas for validation runs/scans.

Every rule set shows its average weight and the number of rules it contains. Rule set names can be changed, and every rule set also has a description.

Rule Weight

The Rule Weight parameter determines how strongly a rule influences the final Project Score. Each rule can be assigned a weight on a scale from 0.0 to 1.0 (displayed as 0% to 100%), allowing teams to fine-tune the importance of individual checks based on organizational priorities.

Weight Scale Definition

  • 0 (0%) — Disabled: A rule with weight 0 is still executed, but its result does not contribute to the final score. This is suitable for rules that should remain visible but have no impact on scoring.
  • 0.01 to 0.99 (1% to 99%) — Partial Influence: Weights between 0.01 and 0.99 represent incremental levels of influence. A rule with weight 0.5 (50%) contributes half as much to the score as a rule with weight 1.0 (100%). Lower weights should be assigned to rules that are helpful but not essential for overall posture.
  • 1.0 (100%) — Maximum Importance: A rule with weight 1.0 has full influence on the Project Score. Findings from these rules have the strongest impact and represent controls considered critical for your environment.
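
To make the proportionality concrete, the sketch below shows one way weighted rule outcomes could be combined into a score. The aggregation formula (a weighted average over non-skipped rules) is an illustrative assumption, not KvantumCI's exact scoring algorithm:

```python
def weighted_score(results: list[tuple[str, float]]) -> int | None:
    """Weighted average over non-skipped rules (illustrative formula only).

    results: (outcome, weight) pairs; outcome is "pass", "fail", or "skip".
    Weight-0 rules and skipped rules stay visible but contribute nothing.
    """
    total = achieved = 0.0
    for outcome, weight in results:
        if outcome == "skip" or weight == 0:
            continue
        total += weight
        if outcome == "pass":
            achieved += weight
    return round(100 * achieved / total) if total else None

# A passing rule with weight 0.5 contributes half as much as one with weight 1.0:
print(weighted_score([("pass", 1.0), ("fail", 0.5), ("skip", 0.8)]))  # -> 67
```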

When to Adjust Rule Weight

Rule Weight allows organizations to align the scoring model with internal policies. The following guidelines can be used as a reference:

| Rule Type | Typical Weight Range |
| --- | --- |
| Informational or low-priority checks | 0–0.2 (0%–20%) |
| Standard compliance or quality controls | 0.3–0.6 (30%–60%) |
| High-risk areas (dependencies, IaC misconfigurations, security boundaries) | 0.7–0.9 (70%–90%) |
| Mandatory or business-critical security controls | 1.0 (100%) |

These ranges can be adapted according to the maturity, regulatory requirements, and risk appetite of the organization.

Configuring the Rule Weight

Users can adjust the weight in the Edit Rule view:

  1. Open the desired rule from the rule list.
  2. Locate the Weight section.
  3. Set a value between 0 and 1 (or 0% to 100%) using the slider or numerical control.
  4. Save the configuration by selecting Update Rule.

The chosen weight is applied immediately to all future scoring calculations.

Example

For the rule ".gitignore is present":

  • Weight 1.0 (100%): Absence of a .gitignore file significantly reduces the project score.
  • Weight 0.5 (50%): The finding has moderate influence.
  • Weight 0.1 (10%): The influence is minimal.
  • Weight 0 (0%): The rule has no effect on scoring but remains visible in results.

Administrative Notes: Internally, weights are stored as decimal values from 0.0 to 1.0 and displayed as percentages (0% to 100%) in the user interface. Rules with weight 0 (0%) are excluded from all scoring calculations but remain part of the evaluation pipeline. The weighting system provides consistent cross-category scoring for areas such as Containers, SAST, SCA, Infrastructure as Code, Git hygiene, and secret detection.