Kliper includes a suite of integrated security tools that let assessors validate technical controls directly from the assessment workbench. Each tool targets a specific PCI DSS requirement, produces structured findings, and can auto-fill assessment answers with a single click.
Tool | PCI Requirement | Purpose
Port & Service Scanner | 1.2.1 | Scan targets with nmap or import XML results to inventory open ports and services
SSL/TLS Checker | 4.2.1 | Validate certificate grades, protocols, and cipher suites
CVE Lookup + EPSS | 6.3 | Search the NVD for known vulnerabilities with exploit probability scores
ASV Scan Import | 11.3.2 | Upload and parse Qualys/Tenable/Rapid7 scan results
Patch Management | 6.3.3 | Upload WSUS/SCCM/Qualys CSV reports to verify patch compliance
Log Audit Validator | 10.2 | Upload log samples to verify required audit trail fields are present
Access Review | 7.2.1, 7.2.5, 8.6.1 | Upload AD/Azure AD/AWS IAM exports to detect access control issues
Pen Test Parser | 11.4 | Upload and parse Burp Suite, Nessus, or ZAP reports
Headers & DNS | 2.2.5 | Check HTTP security headers and DNS records
Payment Page Script Monitor | 11.6.1 | Live-scan payment pages or import script inventories to verify SRI and CSP controls
Firewall Rule Analyzer | 1.2.5, 1.3.1, 1.3.2 | Upload firewall rule exports to detect any-any rules, deprecated protocols, and overly broad CIDRs
Password Policy Analyzer | 8.3.6, 8.3.7, 8.3.9 | Upload AD GPO, Azure AD, AWS IAM, or CSV policy exports and check against PCI password requirements
CISA KEV Tracker | 6.3, 11.3 | Track CISA Known Exploited Vulnerabilities catalog with search, ransomware filter, and vendor breakdown
Secret Scanner | 6.2, 6.3 | Scan public Git repositories for leaked secrets, API keys, and credentials using Gitleaks
Credential Leak Monitor | 8.3, 8.6 | Check domains and companies against the Have I Been Pwned breach database
Threat Briefing | 6.3, 11.3 | Aggregated threat intelligence from NVD, CISA KEV, EPSS, and HIBP sources
Anti-Malware Deployment Checker | 5.2.1, 5.3.1, 5.3.2 | Upload endpoint protection reports to verify agent deployment, signature freshness, and real-time protection
FIM Report Parser | 11.5.1, 11.5.2 | Upload file integrity monitoring reports to identify unauthorized changes to critical system files
Remediation Dashboard | — | Aggregated view of all findings across all tools

Accessing Security Tools

1. Open Your Assessment

Navigate to an assessment from the Engagement Hub or the Assessments page.

2. Open the Security Tools Tab

In the assessment workbench, open the Security dropdown in the top navigation bar and select Security Tools. The Security Tools panel opens with a sidebar listing all available tools.

3. Select a Tool

Click a tool in the left sidebar to switch between them. Each tool operates independently — results from one tool do not affect another.

Port & Service Scanner

The Port & Service Scanner inventories open ports and running services on target hosts. It supports live nmap scanning directly from Kliper and importing existing nmap XML output files.

Running a Live Scan

1. Switch to Live Scan Mode

In the Port Scanner tab, ensure the Live Scan toggle is selected (default).

2. Enter a Target

Type a target IP address, hostname, or CIDR range (e.g., 192.168.1.0/24) in the input field.

3. Click Start Scan

The scan runs server-side using nmap with service version detection (-sV -sS --open). Scans typically complete in 30–120 seconds depending on the target range.

4. Review Results

The result card displays:

Field | Description
Hosts | Number of live hosts discovered
Open Ports | Total number of open ports across all hosts
PCI Issues | Count of risky ports/services flagged
PCI Compliance | PASS or FAIL badge

Expand the card to see every open port with host, port number, protocol, service name, version, and risk level.

5. Apply to Requirement

Click Apply to Req 1.2.1 to auto-fill the assessment answer with a summary of open ports, risky services, and compliance status.

Importing nmap XML

Switch to the Import XML toggle, then select an nmap XML output file. The parser extracts the same host, port, service, and version data as a live scan.

Risky Port Detection

The scanner flags these commonly risky ports and services:
Port | Service | Risk
21 | FTP | Unencrypted file transfer
23 | Telnet | Unencrypted remote access
445 | SMB | File sharing — common attack vector
3306 | MySQL | Database exposed externally
3389 | RDP | Remote desktop — brute force target
5432 | PostgreSQL | Database exposed externally
6379 | Redis | In-memory store — often unauthenticated
Live scans require nmap to be installed on the Kliper server. Only scan targets you are authorized to scan. Unauthorized port scanning may violate network policies or laws.
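For assessors scripting around the tool, the same inventory-and-flag pass can be sketched with Python's standard library. This is an illustrative sketch, not Kliper's actual parser: it reads nmap XML output and flags the risky ports from the table above.

```python
import xml.etree.ElementTree as ET

# Ports the table above lists as commonly risky (Req 1.2.1).
RISKY_PORTS = {
    21: "FTP", 23: "Telnet", 445: "SMB", 3306: "MySQL",
    3389: "RDP", 5432: "PostgreSQL", 6379: "Redis",
}

def parse_nmap_xml(xml_text):
    """Extract open ports from nmap XML and flag risky services."""
    findings = []
    root = ET.fromstring(xml_text)
    for host in root.iter("host"):
        addr = host.find("address").get("addr")
        for port in host.iter("port"):
            if port.find("state").get("state") != "open":
                continue
            portid = int(port.get("portid"))
            svc = port.find("service")
            findings.append({
                "host": addr,
                "port": portid,
                "service": svc.get("name") if svc is not None else "unknown",
                "risky": portid in RISKY_PORTS,
            })
    return findings

# Minimal nmap-style XML fragment for illustration:
sample = (
    '<nmaprun><host><address addr="10.0.0.5"/><ports>'
    '<port protocol="tcp" portid="23"><state state="open"/>'
    '<service name="telnet"/></port></ports></host></nmaprun>'
)
findings = parse_nmap_xml(sample)  # telnet on 10.0.0.5 flagged as risky
```

Generate compatible input with `nmap -sV -oX scan.xml <target>`.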

SSL/TLS Checker

The SSL/TLS Checker validates a domain’s certificate configuration and assigns a letter grade (A through F). It uses SSL Labs for detailed analysis with a direct TLS fallback when SSL Labs is unavailable.

Running a Check

1. Enter the Domain

Type the domain name (e.g., example.com) in the input field. Do not include https:// — the checker adds it automatically.

2. Click Run Check

The check runs server-side. SSL Labs analysis may take 30–60 seconds; if SSL Labs is overloaded, the system falls back to a direct TLS connection check that completes in under 5 seconds.

3. Review Results

The result card displays:

Field | Description
Grade | Letter grade badge (A+, A, B, C, D, F) color-coded green through red
PCI Compliance | PASS or FAIL badge — FAIL if grade is below B, TLS < 1.2, or weak ciphers detected
Certificate Issuer | The certificate authority (e.g., Let’s Encrypt, DigiCert)
Valid Until | Certificate expiration date
Protocol | Supported TLS versions
Cipher Suite | Active cipher suites
PCI Issues | Specific problems that affect PCI compliance (e.g., “TLS 1.0 enabled”, “Weak cipher suites”)

4. Apply to Requirement

Click Apply to Req 4.2.1 to auto-fill the assessment answer with a structured summary of the check results, including grade, compliance status, and identified issues.

Check History

Previous checks are listed below the input form with the domain, grade, date, and PCI status. Expand any previous check to view its full results or re-apply it to the assessment.
SSL Labs results are cached by SSL Labs itself. If you need a fresh analysis, wait a few minutes between checks of the same domain.
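The PASS/FAIL rule described above (FAIL if the grade is below B, any protocol older than TLS 1.2 is enabled, or weak ciphers are detected) can be expressed as a small decision function. A sketch of that logic, not the checker's actual implementation:

```python
GRADE_ORDER = ["A+", "A", "B", "C", "D", "F"]  # best to worst

def pci_tls_status(grade, protocols, weak_ciphers):
    """Apply the Req 4.2.1 FAIL conditions from the result-card table."""
    issues = []
    if GRADE_ORDER.index(grade) > GRADE_ORDER.index("B"):
        issues.append(f"Grade {grade} is below B")
    for proto in protocols:
        if proto in ("SSLv3", "TLS 1.0", "TLS 1.1"):
            issues.append(f"{proto} enabled")
    if weak_ciphers:
        issues.append("Weak cipher suites")
    return ("FAIL" if issues else "PASS"), issues
```

For example, an A-graded endpoint that still accepts TLS 1.0 fails with the issue "TLS 1.0 enabled".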

CVE Vulnerability Lookup

The CVE Lookup tool searches the NIST National Vulnerability Database (NVD) for known vulnerabilities affecting a specific software product and version. Results are cached locally to reduce API calls and improve response times.

Searching for Vulnerabilities

1. Enter Product and Version

Type the software product name (e.g., Apache HTTP Server) and version (e.g., 2.4.49) in the input fields.

2. Click Search

The system queries the NVD API. Results typically return within 2–5 seconds, or instantly if the product/version combination has been searched before (cached for 7 days).

3. Review CVEs

Each CVE result displays:

Field | Description
CVE ID | The unique identifier (e.g., CVE-2021-41773)
Severity | Color-coded badge — Critical (red), High (orange), Medium (yellow), Low (blue)
CVSS Score | Numerical score from 0.0 to 10.0
Description | Summary of the vulnerability
Published | Date the CVE was published

4. Apply to Requirement

Click Apply to Req 6.3 to auto-fill the assessment answer with a summary of discovered CVEs, including severity counts and CVSS scores.

Lookup History

All previous lookups are displayed below the search form with product, version, CVE count, and date. Expand any previous lookup to review its results or re-apply to the assessment.
Good candidates for CVE lookup include web servers (Apache, Nginx), databases (MySQL, PostgreSQL), frameworks (Node.js, Spring), and libraries (OpenSSL, jQuery, Log4j).
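If you want to reproduce a lookup outside Kliper, the public NVD API v2 accepts a keyword query. The sketch below builds the request URL and reduces a response to the fields the result card shows; the response shape assumed here follows the published NVD API v2 format, but verify against the current NVD documentation before relying on it:

```python
import json
from urllib.parse import urlencode

NVD_API = "https://services.nvd.nist.gov/rest/json/cves/2.0"

def build_nvd_url(keyword, per_page=20):
    # keywordSearch is the NVD API v2 parameter for free-text search.
    return f"{NVD_API}?{urlencode({'keywordSearch': keyword, 'resultsPerPage': per_page})}"

def summarize(nvd_json):
    """Reduce an NVD v2 response to id, published date, and description."""
    out = []
    for item in nvd_json.get("vulnerabilities", []):
        cve = item["cve"]
        out.append({
            "id": cve["id"],
            "published": cve.get("published", "")[:10],
            "description": next(
                (d["value"] for d in cve.get("descriptions", []) if d["lang"] == "en"), ""),
        })
    return out

# Canned response fragment for illustration:
canned = {"vulnerabilities": [{"cve": {
    "id": "CVE-2021-41773",
    "published": "2021-10-05T00:00:00",
    "descriptions": [{"lang": "en", "value": "Path traversal in Apache 2.4.49"}],
}}]}
```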

Patch Management

The Patch Management tool parses CSV exports from patch management systems — WSUS, SCCM, Qualys, and generic formats — to verify that security patches are applied within PCI DSS timelines.

Uploading a Patch Report

1. Select the CSV File

Click Upload CSV and select a .csv file exported from your patch management tool.

2. Automatic Vendor Detection

The system auto-detects the vendor format from CSV column headers:

Vendor | Detection Method | Key Columns
WSUS | UpdateTitle or KBArticle header | UpdateTitle, KBArticle, MsrcSeverity, ReleaseDate, Status
SCCM | CI_UniqueID or ComplianceState header | Title, ArticleID, Severity, ComplianceState
Qualys | QID and Patch headers | QID, Title, Severity, Status
Generic | Fallback | Best-effort column matching for patch name, severity, status, dates

3. Review Results

The result card shows:

Field | Description
Total Patches | Number of patches in the report
Missing Critical | Count of uninstalled critical patches
Missing High | Count of uninstalled high-severity patches
Overdue (>30d) | Patches not installed within 30 days of release
Vendor | Detected patch management vendor
PCI Compliance | PASS if no critical/high patches are overdue beyond 30 days

Expand the card to see every patch with name, KB article, severity, release date, install status, and days overdue.

4. Apply to Requirement

Click Apply to Req 6.3.3 to auto-fill the assessment answer with patch compliance status, missing patch counts, and overdue details.
PCI DSS Requirement 6.3.3 requires that critical and high-severity security patches be installed within one month of release. The tool flags any patch exceeding this 30-day threshold.
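The 30-day overdue calculation is simple date arithmetic. A sketch of how the "days overdue" figure in the detail view can be derived (an illustration, not the tool's source):

```python
from datetime import date

def days_overdue(release_date, installed, today):
    """Days past the 30-day PCI install window (Req 6.3.3); 0 if compliant."""
    if installed:
        return 0
    elapsed = (today - release_date).days
    return max(0, elapsed - 30)

# A critical patch released 45 days ago and still missing is 15 days overdue:
overdue = days_overdue(date(2026, 1, 1), False, date(2026, 2, 15))
```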

Log Audit Validator

The Log Audit Validator analyzes log samples to verify that all six PCI DSS Requirement 10.2 audit trail fields are present. It supports syslog (RFC 3164 and 5424), Windows Event XML, Windows Event CSV, JSON lines, and generic CSV formats.

Uploading a Log Sample

1. Select the Log File

Click Upload Log File and select a log file (.log, .txt, .json, .csv, or .xml).

2. Automatic Format Detection

The system auto-detects the log format:

Format | Detection Method
Syslog (RFC 3164) | Lines starting with timestamp pattern Mon DD HH:MM:SS
Syslog (RFC 5424) | Lines starting with <pri>version and ISO 8601 timestamps
Windows Event XML | Contains <Event> or <EventData> tags
Windows Event CSV | CSV with EventID and Source/Level columns
JSON Lines | Lines starting with { that parse as valid JSON
Generic CSV | CSV with comma-separated values and a header row

3. Review Results

The result card shows a field completeness score (e.g., 5/6) and checks each of the six required PCI audit trail fields:

Required Field | PCI Reference | What It Looks For
User Identification | 10.2.1 | Username, UID, account name, actor
Event Type / Action | 10.2.1 | Event ID, action, category, facility
Date and Time | 10.2.1 | Timestamps in ISO 8601, syslog, or epoch format
Success / Failure | 10.2.1 | Status codes, success/failure/denied keywords
Event Origination | 10.2.1 | Source IP, client address, hostname
Affected Resource | 10.2.1 | Target object, destination, file path, endpoint

Each field displays a confidence level (high, medium, low, or none) and sample values extracted from the log.

4. Apply to Requirement

Click Apply to Req 10.2 to auto-fill the assessment answer with the log format, field completeness, confidence levels, and compliance determination.
Upload a representative sample of 50–500 log entries rather than full log archives. The validator samples up to 500 entries for analysis — larger files are parsed but only the first 500 entries are evaluated for field detection.
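The format-detection table above amounts to a first-line heuristic. A best-effort sketch of that heuristic (not the validator's actual code) for the formats with unambiguous first-line signatures:

```python
import json
import re

def detect_log_format(first_line):
    """Best-effort detection mirroring the format table above."""
    # RFC 3164: "Mon DD HH:MM:SS ..." timestamp prefix
    if re.match(r"^[A-Z][a-z]{2}\s+\d{1,2}\s\d{2}:\d{2}:\d{2}", first_line):
        return "syslog-rfc3164"
    # RFC 5424: "<pri>version " prefix, e.g. "<34>1 "
    if re.match(r"^<\d{1,3}>\d\s", first_line):
        return "syslog-rfc5424"
    if "<Event" in first_line:
        return "windows-event-xml"
    if first_line.lstrip().startswith("{"):
        try:
            json.loads(first_line)
            return "json-lines"
        except ValueError:
            pass
    return "csv-or-unknown"
```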

Access Review

The Access Review tool parses user account exports from identity providers to detect inactive accounts, shared/generic accounts, excessive privileges, and missing MFA — all key requirements under PCI DSS Requirements 7 and 8.

Uploading a User Export

1. Select the CSV File

Click Upload CSV and select a .csv file exported from your directory service or identity provider.

2. Automatic Source Detection

The system auto-detects the identity provider:

Source | Detection Method | Key Columns
Active Directory | SamAccountName or LastLogonDate header | SamAccountName, Enabled, LastLogonDate, PasswordLastSet, MemberOf
Azure AD | UserPrincipalName or AccountEnabled header | UserPrincipalName, AccountEnabled, LastSignInDateTime, AssignedRoles, MFAStatus
AWS IAM | arn and password_last_used headers | user, arn, password_enabled, password_last_used, mfa_active
Generic | Fallback | Best-effort column matching for username, status, last login, groups, MFA

3. Review Results

The result card shows summary statistics and a PCI compliance determination:

Metric | Description
Total Accounts | Number of user accounts in the export
Inactive (>90d) | Enabled accounts with no login in over 90 days (violates Req 8.1.4)
Shared Accounts | Generic/shared accounts like admin, test, service (violates Req 8.5)
Elevated Access | Accounts with admin/privileged roles (review per Req 7.2.1)
No MFA | Accounts without multi-factor authentication (violates Req 8.4.2)

Expand the card to see a detailed findings table with each flagged account, the finding type, risk level, last login date, MFA status, and admin status.

4. Apply to Requirements

Click Apply to Req 7.2 & 8.6 to auto-fill three assessment answers simultaneously:

Requirement | What Is Filled
7.2.1 | Access privileges assigned based on job classification and function
7.2.5 | Access privileges reviewed at least semi-annually
8.6.1 | System or application accounts managed based on least privilege

Finding Types and Risk Levels

Finding | Risk Level | PCI Reference
No MFA on admin account | Critical | 8.4.2
Inactive account (>90 days) | High | 8.1.4
Shared/generic account | High | 8.5
No MFA (non-admin) | High | 8.4.2
Excessive privilege | Medium | 7.2.1
Password expired (>90 days) | Medium | 8.3.9
The Access Review tool flags potential issues based on pattern matching (e.g., usernames matching “admin”, “shared”, “service”). Always verify flagged accounts manually — some service accounts may be legitimate and properly managed.
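The pattern-matching approach described above can be sketched per account. This is an illustrative approximation of the finding logic, with hypothetical labels, not the tool's exact rules:

```python
from datetime import date

# Substrings the doc cites as shared/generic name hints.
SHARED_NAME_HINTS = ("admin", "shared", "service", "test")

def review_account(username, enabled, last_login, is_admin, mfa, today):
    """Return finding labels for one account, per the table above."""
    findings = []
    if enabled and last_login and (today - last_login).days > 90:
        findings.append("inactive>90d")          # Req 8.1.4
    if any(h in username.lower() for h in SHARED_NAME_HINTS):
        findings.append("shared/generic")        # Req 8.5
    if not mfa:
        findings.append("no-mfa-admin" if is_admin else "no-mfa")  # Req 8.4.2
    return findings

# A dormant, MFA-less "svc-admin" account trips all three checks:
flags = review_account("svc-admin", True, date(2025, 1, 1), True, False, date(2025, 6, 1))
```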

ASV Scan Import

The ASV Scan Import tool parses CSV exports from Approved Scanning Vendors — Qualys, Tenable (Nessus), and Rapid7 — and converts them into structured findings with PCI compliance determination.

Uploading a Scan

1. Fill In Scan Details

Enter the scan metadata:
  • Scan Date — when the scan was performed
  • Quarter — the PCI quarter this scan covers (e.g., Q1 2026)
  • Vendor — select Qualys, Tenable, Rapid7, or Generic (auto-detected if left as Auto)

2. Select the CSV File

Click the file input to select a .csv file exported from your ASV scanning tool.

3. Click Upload & Parse

The system detects the vendor format from the CSV column headers and parses each row into a normalized finding with host, port, severity, CVSS score, and remediation guidance.

4. Review Results

The result card shows:
  • PASS / FAIL badge — FAIL if any finding has CVSS score >= 4.0
  • Host count — number of unique hosts scanned
  • Vulnerability count — total number of findings
  • Severity breakdown — badge counts for Critical, High, Medium, Low, Info

5. Apply to Requirement

Click Apply to Req 11.3.2 to auto-fill the assessment answer with scan summary, compliance status, host count, and severity breakdown.

Supported Vendor Formats

Vendor | Detection Method | Key Columns
Qualys | Column header contains QID | IP, DNS, QID, Title, Severity, CVSS, Port, Protocol, CVE ID, PCI Vuln
Tenable (Nessus) | Column header contains Plugin ID | Plugin ID, CVE, CVSS, Risk, Host, Port, Name, Synopsis, Solution
Rapid7 | Column header contains Vulnerability ID | Vulnerability ID, Asset IP, Asset Names, Severity, CVSS Score, Title
Generic CSV | Fallback format | Best-effort column matching — looks for host, port, severity, cvss, title, description

Managing Findings

Expand a scan result to view all findings. Each finding row displays:
  • Severity badge — color-coded (Critical, High, Medium, Low, Info)
  • Title — vulnerability name
  • Host and Port — affected asset
  • CVSS Score — numerical risk score
  • Remediation Status — dropdown to mark as Open, In Progress, Fixed, or Accepted Risk
  • False Positive — toggle to flag false positives (excluded from compliance calculation)
Use the severity filter dropdown to focus on specific severity levels.
PCI DSS requires that all vulnerabilities with CVSS score 4.0 or higher are resolved for a passing ASV scan. Findings marked as False Positive are excluded from this calculation, but the assessor must document the justification.
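The pass/fail rule, including the false-positive exclusion, reduces to one predicate. A sketch of that calculation (assuming findings carry `cvss` and `false_positive` fields, as the UI implies):

```python
def asv_scan_passes(findings):
    """PASS only if no non-false-positive finding scores CVSS >= 4.0."""
    return all(f["false_positive"] or f["cvss"] < 4.0 for f in findings)

# A disputed CVSS 7.5 finding plus a CVSS 3.9 finding still passes:
example = [
    {"cvss": 7.5, "false_positive": True},   # excluded, justification documented
    {"cvss": 3.9, "false_positive": False},  # below the 4.0 threshold
]
result = asv_scan_passes(example)
```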

Penetration Test Parser

The Pen Test Parser imports results from common penetration testing tools and normalizes findings into a unified format. It supports three major formats and a generic CSV fallback.

Uploading Test Results

1. Fill In Test Details

Enter the penetration test metadata:
  • Test Type — External, Internal, or Segmentation
  • Test Date — when the test was performed
  • Tester Name — the person or firm that conducted the test
  • Tool — select Burp Suite, Nessus, OWASP ZAP, or Generic (leave as Auto-detect for automatic format detection)

2. Select the Report File

Click the file input to select an .xml or .csv file exported from the penetration testing tool.

3. Click Upload & Parse

The system auto-detects the file format:

Format | Detection
Burp Suite XML | .xml file with <issues> root element
OWASP ZAP XML | .xml file with <OWASPZAPReport> root element
Nessus CSV | .csv file with Plugin ID column header
Generic CSV | .csv file — best-effort column matching

Findings are extracted and normalized with severity, confidence, host, port, CVE/CWE references, and remediation guidance.

4. Review Results

The result card shows:
  • PASS / FAIL badge — FAIL if any Critical or High findings exist
  • Tool detected — which parser was used (Burp, Nessus, ZAP, Generic)
  • Test type — External, Internal, or Segmentation
  • Severity breakdown — badge counts for High, Medium, Low

5. Apply to Requirement

Click Apply to Req 11.4 to auto-fill the assessment answer. The auto-fill maps the test type to the correct sub-requirement:

Test Type | Target Sub-Requirement
External | 11.4.3 (External penetration testing)
Internal | 11.4.2 (Internal penetration testing)
Segmentation | 11.4.5 (Segmentation penetration testing)

Managing Findings

Expand a result to view all findings with severity filter and pagination. Each finding displays:
  • Severity badge — High (orange), Medium (yellow), Low (blue)
  • Title — vulnerability name
  • Host — target URL or IP
  • Confidence — Certain, Firm, or Tentative
  • Remediation Status — dropdown to track fix progress
  • Expandable detail — full description and recommended remediation (click the finding row)
Informational findings are parsed and stored but excluded from the PASS/FAIL determination and severity badge counts. Only Critical, High, Medium, and Low findings affect compliance status.
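The detection rules in the format table — XML root element for Burp/ZAP, a Plugin ID header for Nessus CSV — can be sketched directly with the standard library (an illustration of the table, not the parser's source):

```python
import csv
import io
import xml.etree.ElementTree as ET

def detect_report_format(filename, content):
    """Mirror the detection table: XML root element, else CSV headers."""
    if filename.endswith(".xml"):
        root = ET.fromstring(content).tag
        if root == "issues":
            return "burp"
        if root == "OWASPZAPReport":
            return "zap"
        return "unknown-xml"
    if filename.endswith(".csv"):
        header = next(csv.reader(io.StringIO(content)), [])
        return "nessus" if "Plugin ID" in header else "generic-csv"
    return "unsupported"
```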

HTTP Header & DNS Checker

The Header & DNS Checker validates HTTP security headers and DNS security records for a domain, assigning a letter grade (A through F) and identifying PCI-relevant configuration gaps. All checks run server-side using Node.js built-ins — no external API dependencies.

Running a Check

1. Enter the Domain

Type the domain name (e.g., example.com) in the input field.

2. Click Run Check

The system performs two checks in parallel:
  1. HTTP headers — makes an HTTPS request to the domain and evaluates the response headers
  2. DNS records — queries DNS for SPF, DMARC, and CAA records

3. Review Results

The result card displays a grade badge and a detailed checklist of all checks.

4. Apply to Requirement

Click Apply to Req 2.2.5 to auto-fill the assessment answer with the grade, header status summary, DNS record findings, and PCI compliance status.

HTTP Security Headers

Header | Expected Value | Status if Missing
Strict-Transport-Security (HSTS) | Present with max-age >= 31,536,000 | Fail
Content-Security-Policy (CSP) | Present (warn if contains unsafe-inline or unsafe-eval) | Fail
X-Content-Type-Options | nosniff | Fail
X-Frame-Options | DENY or SAMEORIGIN | Fail
Referrer-Policy | Present | Warn
Permissions-Policy | Present | Warn
Cache-Control | Contains no-store or no-cache | Warn

DNS Security Records

Record | What Is Checked | Status if Missing
SPF | TXT record starting with v=spf1 | Warn
DMARC | TXT record at _dmarc.{domain} | Warn
CAA | Certificate Authority Authorization records | Warn

Grading

The overall grade is calculated from the pass/warn/fail distribution:
Grade | Condition
A | All checks pass
B | All checks pass or warn (no failures)
C | 1–2 failed checks
D | 3 or more failed checks
F | Critical failures (missing HSTS or missing CSP)

PCI Compliance

The check is marked PCI Fail if any of these critical headers are missing:
  • Strict-Transport-Security (HSTS)
  • Content-Security-Policy (CSP)
  • X-Frame-Options
For subdomains, DNS records like SPF and DMARC are typically configured on the root domain. Missing SPF/DMARC on a subdomain is reported as a warning, not a failure.
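The grading table can be read as a short decision procedure: critical failures dominate, then the failure count. A sketch of that procedure (an interpretation of the table, not the checker's source):

```python
def overall_grade(results):
    """results: dict of check name -> 'pass' | 'warn' | 'fail'."""
    fails = [name for name, status in results.items() if status == "fail"]
    # Critical failures (missing HSTS or CSP) force an F.
    if "Strict-Transport-Security" in fails or "Content-Security-Policy" in fails:
        return "F"
    if not fails:
        # A if everything passed outright, B if anything only warned.
        return "A" if all(s == "pass" for s in results.values()) else "B"
    return "C" if len(fails) <= 2 else "D"
```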

Payment Page Script Monitor

The Payment Page Script Monitor addresses PCI DSS 4.0.1 Requirement 11.6.1 — a brand-new requirement that mandates monitoring and integrity verification of all scripts loaded on payment pages. The tool supports two modes: live scanning a URL and importing a CSV script inventory.

Live Scanning a Payment Page

1. Enter the Payment Page URL

Type the full URL of the payment page (e.g., https://shop.example.com/checkout) in the URL input field.

2. Click Scan URL

The system fetches the page server-side, extracts all <script> tags, checks for Subresource Integrity (SRI) hashes, and inspects the Content-Security-Policy (CSP) header.

3. Review Results

The result card displays:

Field | Description
Total Scripts | Number of scripts found on the page
Third-Party | Scripts loaded from external domains
Missing SRI | Scripts without integrity attribute
Unauthorized | Scripts not in the approved inventory
CSP Header | Whether a Content-Security-Policy header is present
PCI Compliance | PASS or FAIL badge

Expand the card to see each script with its URL, type (external/inline), domain, SRI status, CSP allowlist status, and risk level.

4. Apply to Requirement

Click Apply to Req 11.6.1 to auto-fill the assessment answer with a summary of the scan results, including script counts, SRI coverage, CSP presence, and compliance status.

Importing a Script Inventory

Switch to Import CSV mode to upload a CSV file with columns such as script_url, domain, has_sri, approved, and notes. The system parses the inventory, detects third-party scripts, and evaluates compliance based on SRI and approval status.

PCI Compliance Logic

Condition | Result
Third-party script without SRI integrity hash | FAIL
Script not approved in inventory | FAIL
No CSP header present | WARN
All scripts have SRI and are approved | PASS
Requirement 11.6.1 is new in PCI DSS 4.0.1 and becomes mandatory on March 31, 2025. It requires that all scripts on payment pages are authorized, integrity-verified, and inventoried.
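The core of a script scan — collecting `<script src>` tags and checking for an `integrity` attribute — can be sketched with Python's built-in HTML parser. An illustrative sketch (the domain below is hypothetical), not Kliper's scanner:

```python
from html.parser import HTMLParser

class ScriptAudit(HTMLParser):
    """Collect <script src> tags and whether each carries an SRI hash."""
    def __init__(self, page_host):
        super().__init__()
        self.page_host = page_host
        self.scripts = []

    def handle_starttag(self, tag, attrs):
        if tag != "script":
            return
        a = dict(attrs)
        src = a.get("src")
        if not src:
            return  # inline scripts would be tracked separately
        third_party = src.startswith("http") and self.page_host not in src
        self.scripts.append({
            "src": src,
            "has_sri": "integrity" in a,       # SRI hash present?
            "third_party": third_party,
        })

audit = ScriptAudit("shop.example.com")
audit.feed('<script src="https://cdn.tagvendor.example/pay.js"></script>')
# pay.js is third-party and has no integrity attribute -> FAIL per the table
```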

Firewall Rule Analyzer

The Firewall Rule Analyzer parses firewall rule exports and flags PCI-relevant violations such as any-any rules, deprecated protocols, overly broad CIDRs, and dangerous ports without source restriction.

Uploading Firewall Rules

1. Select the Rules File

Click Upload Rules File and select a .txt, .conf, .xml, or .json file exported from your firewall.

2. Automatic Format Detection

The system auto-detects the firewall format:

Format | Detection Method
iptables | Lines starting with *filter, :INPUT, or -A
Cisco ACL | Lines containing access-list, permit, or deny
pfSense XML | Contains <filter> and <rule> XML elements
AWS Security Groups | JSON with SecurityGroups and IpPermissions keys

3. Review Results

The result card shows:

Field | Description
Total Rules | Number of rules parsed
Allow Rules | Count of permit/accept rules
Deny Rules | Count of deny/drop/reject rules
Flagged | Count of rules with PCI violations
Format | Detected firewall format
PCI Compliance | PASS or FAIL badge

Expand the card to see every rule with rule number, source, destination, port, protocol, action, violation type, and risk level.

4. Apply to Requirements

Click Apply to Req 1.2.5 & 1.3 to auto-fill the assessment answers for traffic rules documentation (1.2.5), inbound restrictions (1.3.1), and outbound restrictions (1.3.2).

Violation Types

Violation | Risk Level | Description
Any-Any Rule | Critical | Rule allows all traffic from any source to any destination
Deprecated Protocol | High | Telnet (23), FTP (21), or TFTP (69) allowed
Dangerous Port | High | RDP (3389) or SMB (445) without source restriction
Broad CIDR | Medium | Allow rule with /8 or wider source/destination
No Default Deny | Medium | No explicit deny-all rule at the end of the chain
The analyzer parses text-based rule exports. It does not connect to live firewalls. Ensure the exported rules represent the current running configuration.
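Per-rule violation checks like those in the table can be sketched over a normalized (source, destination, port) triple; the `ipaddress` module handles the "/8 or wider" test. A best-effort illustration, not the analyzer's rule engine:

```python
import ipaddress

def rule_violations(src, dst, port, protocol):
    """Flag the violation patterns from the table above for one allow rule."""
    v = []
    if src == "any" and dst == "any" and port in ("any", "*"):
        v.append(("any-any", "critical"))
    if port in ("21", "23", "69"):
        v.append(("deprecated-protocol", "high"))
    if port in ("3389", "445") and src == "any":
        v.append(("dangerous-port-unrestricted", "high"))
    for cidr in (src, dst):
        if "/" in cidr:
            net = ipaddress.ip_network(cidr, strict=False)
            if net.prefixlen <= 8:  # /8 or wider
                v.append(("broad-cidr", "medium"))
    return v
```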

Password Policy Analyzer

The Password Policy Analyzer checks password policy exports against PCI DSS 4.0.1 Requirements 8.3.6, 8.3.7, and 8.3.9. It supports exports from Active Directory Group Policy, Azure AD, AWS IAM, and generic CSV checklists.

Uploading a Policy Export

1. Select the Policy File

Click Upload Policy File and select a .inf, .txt, .json, or .csv file exported from your identity provider.

2. Automatic Source Detection

The system auto-detects the policy source:

Source | Detection Method | Key Fields
AD GPO | Lines matching MinimumPasswordLength = N pattern | MinimumPasswordLength, PasswordComplexity, PasswordHistorySize, MaximumPasswordAge, LockoutBadCount
Azure AD | JSON with PasswordPolicy or ConditionalAccess keys | MinimumLength, RequireUppercase, MaxPasswordAge, LockoutThreshold
AWS IAM | JSON with MinimumPasswordLength + RequireUppercaseCharacters | MinimumPasswordLength, RequireNumbers, MaxPasswordAge, PasswordReusePrevention
CSV Checklist | CSV with policy_name, value columns | Generic key-value pairs

3. Review Results

The result card shows six summary cards (Min Length, Complexity, History, Max Age, Lockout Threshold, Lockout Duration) and a detailed checklist:

Check | PCI Requirement | Expected Value
Minimum length | 8.3.6 | 12 or more characters
Complexity (alpha + numeric) | 8.3.6 | Both required
Password history | 8.3.7 | Last 4 not reusable
Maximum age | 8.3.9 | 90 days or less
Lockout threshold | 8.3.4 | 10 or fewer attempts
Lockout duration | 8.3.4 | 30 minutes or more
First-login change | 8.3.9 | Required

Each check shows the expected value, actual value, severity, and pass/fail status.

4. Apply to Requirements

Click Apply to Req 8.3 to auto-fill the assessment answers for password complexity (8.3.6), password history (8.3.7), and password change frequency (8.3.9).
For Active Directory, export the password policy with secedit /export /cfg policy.inf /areas SECURITYPOLICY from a domain controller. The resulting .inf file is directly supported by the analyzer.
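Once a policy export is parsed into key/value form, the checklist is a threshold comparison. A sketch using the numeric thresholds from the table above (field names here are hypothetical normalized keys, not the analyzer's schema):

```python
PCI_PASSWORD_CHECKS = [
    # (key, threshold, comparator, PCI requirement)
    ("min_length",        12, "ge", "8.3.6"),
    ("history",            4, "ge", "8.3.7"),
    ("max_age_days",      90, "le", "8.3.9"),
    ("lockout_threshold", 10, "le", "8.3.4"),
]

def check_policy(policy):
    """Compare a parsed policy dict against the PCI thresholds above."""
    results = []
    for key, threshold, cmp, req in PCI_PASSWORD_CHECKS:
        actual = policy.get(key)
        ok = actual is not None and (
            actual >= threshold if cmp == "ge" else actual <= threshold)
        results.append({"check": key, "req": req, "actual": actual, "pass": ok})
    return results

# A compliant policy: 14-char minimum, history of 4, 60-day max age, 5-attempt lockout.
report = check_policy({"min_length": 14, "history": 4,
                       "max_age_days": 60, "lockout_threshold": 5})
```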

Anti-Malware Deployment Checker

The Anti-Malware Deployment Checker parses endpoint protection reports (CSV exports from AV/EDR tools) to verify agent deployment coverage, signature freshness, scan frequency, and real-time protection status across all endpoints.

Uploading an Endpoint Report

1. Select the CSV File

Click Upload CSV and select a .csv file exported from your endpoint protection platform.

2. Automatic Vendor Detection

The system auto-detects the AV/EDR vendor from the CSV headers:

Vendor | Detection Method | Key Columns
Windows Defender | AMRunningMode or Signature Date + Real-Time Protection headers | ComputerName, AMRunningMode, AntivirusSignatureLastUpdated, LastFullScanStartTime, RealTimeProtectionEnabled
CrowdStrike | AgentVersion + LastSeen headers | Hostname, Status, AgentVersion, LastSeen, OperatingSystem
SentinelOne | ThreatCount + IsActive headers | ComputerName, IsActive, AgentVersion, LastActiveDate, ScanStatus
Generic CSV | Fallback | hostname, agent_status, signature_date, last_scan, realtime_protection

3. Review Results

The result card shows:

Field | Description
Total Endpoints | Number of endpoints in the report
Compliant | Endpoints with agent running, fresh signatures, and real-time protection enabled
Agents Down | Endpoints where the protection agent is not running
Signatures Outdated | Endpoints with signatures older than 7 days
Real-Time Disabled | Endpoints without real-time protection
Vendor | Detected AV/EDR vendor
PCI Compliance | PASS or FAIL badge

Expand the card to see each endpoint with hostname, agent status, agent version, signature age, last scan date, real-time protection status, OS, risk level, and compliance status.

4. Apply to Requirements

Click Apply to Req 5.2 & 5.3 to auto-fill the assessment answers for anti-malware deployment (5.2.1), keeping definitions current (5.3.1), and periodic scans with real-time protection (5.3.2).

PCI Compliance Logic

Condition | Result
Agent not running on any endpoint | FAIL
Signatures older than 7 days | FAIL
Real-time protection disabled | FAIL
No scan in over 7 days | WARN
All endpoints: agent running + fresh signatures + real-time enabled | PASS
The tool supports human-readable CSV headers (e.g., “Agent Status”, “Signature Date”, “Real-Time Protection”) as well as system-generated column names (e.g., AMRunningMode, AntivirusSignatureLastUpdated). Both formats are auto-detected.
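The per-endpoint classification in the compliance table is an ordered set of checks: a down agent fails before signature age is even considered. A sketch of that ordering (an illustration, not the checker's implementation):

```python
from datetime import date

def endpoint_status(agent_running, signature_date, realtime, last_scan, today):
    """Classify one endpoint per the compliance table above."""
    if not agent_running:
        return "FAIL: agent down"
    if (today - signature_date).days > 7:
        return "FAIL: signatures outdated"
    if not realtime:
        return "FAIL: real-time protection disabled"
    if (today - last_scan).days > 7:
        return "WARN: no recent scan"
    return "PASS"
```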

FIM Report Parser

The FIM (File Integrity Monitoring) Report Parser analyzes change logs from file integrity monitoring tools to identify unauthorized modifications to critical system files — a key control under PCI DSS Requirements 11.5.1 and 11.5.2.

Uploading a FIM Report

1. Select the Report File

Click Upload Report and select a .json, .jsonl, .csv, .txt, or .log file exported from your FIM tool.

2. Automatic Tool Detection

The system auto-detects the FIM tool format:

Tool | Detection Method | Format
OSSEC / Wazuh | JSON with syscheck.path and syscheck.event fields | JSON lines
Tripwire | CSV with Object Name, Object Type, Severity columns | CSV
AIDE | Lines matching File: /path + Changed: ... pattern | Text report
Generic CSV | Fallback — columns like file_path, change_type, timestamp | CSV

3. Review Results

The result card shows:

Field | Description
Total Changes | Number of file change events detected
Critical Changes | Changes to critical system files (see list below)
Unauthorized | Changes not marked as authorized
Authorized | Changes with an authorization record
Files Monitored | Total number of unique files in the report
Tool | Detected FIM tool
PCI Compliance | PASS or FAIL badge

Expand the card to see each change event with file path, change type (added/modified/deleted), timestamp, critical file flag, authorization status, hash values (before/after), and risk level.

4. Apply to Requirements

Click Apply to Req 11.5 to auto-fill the assessment answers for change-detection deployment (11.5.1) and alerting on unauthorized modifications (11.5.2).

Critical File Detection

The parser flags modifications to these critical system files:
Platform | Critical Paths
Linux | /etc/passwd, /etc/shadow, /etc/sudoers, /etc/ssh/sshd_config, /boot/*, /usr/bin/sudo, /usr/sbin/*, /etc/crontab
Windows | C:\Windows\System32\*, boot.ini, ntoskrnl.exe, SAM, SECURITY, SYSTEM (registry hives)

PCI Compliance Logic

Condition | Result
Critical system file changed without authorization | FAIL
Any unauthorized file modification detected | FAIL
No FIM alerts at all (possible gap in monitoring coverage) | WARN
All changes authorized and critical files monitored | PASS
The FIM parser evaluates the authorized field in the report data. Ensure your FIM tool exports include an authorization or approval column, or mark authorized changes manually in the CSV before uploading.
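Critical-path matching against the wildcard list above can be sketched with `fnmatch`. An illustrative approximation of the flagging logic, not the parser itself:

```python
from fnmatch import fnmatch

# Wildcard patterns from the critical-paths table (Linux subset shown).
CRITICAL_PATTERNS = [
    "/etc/passwd", "/etc/shadow", "/etc/sudoers", "/etc/ssh/sshd_config",
    "/boot/*", "/usr/bin/sudo", "/usr/sbin/*", "/etc/crontab",
]

def is_critical(path):
    """True if the changed file matches a critical-path pattern."""
    return any(fnmatch(path, pat) for pat in CRITICAL_PATTERNS)

def change_verdict(path, authorized):
    """Apply the compliance table: any unauthorized change fails."""
    if not authorized:
        return ("FAIL: critical file changed without authorization"
                if is_critical(path) else "FAIL: unauthorized change")
    return "PASS"
```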

CVE Lookup + EPSS Enrichment

The CVE Lookup tool now includes EPSS (Exploit Prediction Scoring System) enrichment from FIRST.org. Every CVE result is automatically enriched with its exploit probability score and percentile ranking, helping assessors prioritize vulnerabilities based on real-world exploitability — not just CVSS severity.

What EPSS Adds

Field | Description
EPSS Score | Probability (0–100%) that the CVE will be exploited in the wild within the next 30 days
EPSS Percentile | Ranking relative to all scored CVEs (e.g., 95th percentile = more exploitable than 95% of all CVEs)

How It Works

When you search for a CVE or keyword, the tool:
  1. Queries the NVD API v2 for vulnerability data (description, CVSS score, severity, references)
  2. Batch-queries the FIRST EPSS API for exploit probability scores for all returned CVEs
  3. Displays both CVSS and EPSS side-by-side in the results
EPSS scores update daily. A CVE with a high CVSS score but low EPSS score may be theoretically severe but unlikely to be exploited. Conversely, a medium-CVSS CVE with a high EPSS score demands immediate attention. Use both metrics together for prioritization.
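The batch enrichment step can be reproduced against the public FIRST EPSS API, which accepts a comma-separated list of CVE IDs and returns string-valued `epss` and `percentile` fields. The response shape assumed below follows FIRST's published format; verify against their current documentation:

```python
from urllib.parse import urlencode

EPSS_API = "https://api.first.org/data/v1/epss"

def build_epss_url(cve_ids):
    # The FIRST EPSS API accepts a comma-separated batch of CVE IDs.
    return f"{EPSS_API}?{urlencode({'cve': ','.join(cve_ids)})}"

def enrich(epss_json):
    """Map CVE id -> (probability, percentile) from an EPSS response."""
    return {
        row["cve"]: (float(row["epss"]), float(row["percentile"]))
        for row in epss_json.get("data", [])
    }

# Canned response fragment for illustration:
canned = {"data": [{"cve": "CVE-2021-44228", "epss": "0.975", "percentile": "0.999"}]}
```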

CISA KEV Tracker

The CISA KEV (Known Exploited Vulnerabilities) Tracker lets you search and monitor CISA’s catalog of vulnerabilities that are confirmed to be actively exploited in the wild. This is critical for PCI DSS Requirements 6.3 (vulnerability management) and 11.3 (penetration testing scope).

Features

1. Dashboard Stats

The top of the panel shows three summary cards:
| Card | Description |
| --- | --- |
| Total KEVs | Total number of vulnerabilities in the CISA KEV catalog |
| Published (30 days) | KEVs added to the catalog in the last 30 days |
| Top Affected Vendor | Vendor with the most KEV entries |

2. Search

Search by CVE ID, vendor name, product name, or keyword. Results show the CVE ID (linked to NVD), CVSS score and severity badge, vendor/product, description, date added, remediation due date, and overdue status.
3. Recent KEVs

The Recent (90 days) tab shows all KEVs published in the last 90 days, sorted by date. Use this to identify newly exploited vulnerabilities that may affect in-scope systems.
4. Ransomware Filter

The Ransomware tab filters to KEVs that are known to be used in ransomware campaigns — a high-priority subset for PCI DSS assessments.
5. Top Vendors

The Top Vendors tab shows a bar chart of the most affected vendors in the KEV catalog, helping identify vendor-specific risk concentrations.

KEV Entry Details

Each KEV entry card includes:
| Field | Description |
| --- | --- |
| CVE ID | Linked to NVD detail page |
| CVSS Score | Severity badge (Critical/High/Medium/Low) |
| Vendor / Product | Affected software |
| Date Added | When CISA added it to the KEV catalog |
| Due Date | CISA’s required remediation deadline |
| Overdue | Orange badge if the due date has passed |
| Description | Vulnerability summary |
| Required Action | CISA’s recommended remediation action |

KEV data is sourced from the NVD API with CISA extension fields. The catalog loads in the background on first access (approximately 1,500+ entries paginated from NVD). Stats and search results may take a moment to populate on cold start.
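The background catalog load can be sketched with the NVD API v2 `hasKev` filter. The endpoint and the `cisa*` extension field names are NVD's; the pagination helper itself is illustrative, not Kliper's actual loader:

```python
import json
import time
from urllib.request import urlopen

NVD_URL = ("https://services.nvd.nist.gov/rest/json/cves/2.0"
           "?hasKev&resultsPerPage=2000&startIndex={start}")

def extract_kev(vuln):
    """Pull the CISA extension fields from one NVD vulnerability entry."""
    cve = vuln["cve"]
    return {
        "id": cve["id"],
        "added": cve.get("cisaExploitAdd"),
        "due": cve.get("cisaActionDue"),
        "action": cve.get("cisaRequiredAction"),
    }

def load_kev_catalog():
    """Page through every hasKev result (roughly 1,500+ entries)."""
    start, entries = 0, []
    while True:
        with urlopen(NVD_URL.format(start=start), timeout=30) as resp:
            page = json.load(resp)
        entries.extend(extract_kev(v) for v in page["vulnerabilities"])
        start += page["resultsPerPage"]
        if page["resultsPerPage"] == 0 or start >= page["totalResults"]:
            return entries
        time.sleep(6)  # NVD rate-limits unauthenticated clients between pages
```
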

Secret Scanner

The Secret Scanner uses Gitleaks to scan public Git repositories for accidentally committed secrets — API keys, tokens, passwords, private keys, and other sensitive credentials. This supports PCI DSS Requirements 6.2 (secure development) and 6.3 (vulnerability management).

Running a Scan

1. Enter Repository URL

Paste a public Git repository URL (e.g., https://github.com/org/repo). The repository must be publicly accessible — private repositories are not currently supported.
2. Start Scan

Click Start Scan. The scanner clones the repository and runs Gitleaks against the full commit history. Scan time depends on repository size.
3. Review Results

The results show summary cards:
| Card | Description |
| --- | --- |
| Total Findings | Number of leaked secrets detected |
| Critical | High-entropy secrets (API keys, private keys) |
| High | Passwords, tokens, and other credentials |
| Scan Duration | Time taken to complete the scan |

Below the summary, findings are grouped by rule (e.g., “aws-access-key-id”, “generic-api-key”, “private-key”). Expand each finding to see:
  • File path and line number where the secret was found
  • Commit hash that introduced the secret
  • Author and date of the commit
  • Masked secret (partially redacted for safety)
  • Rule ID identifying the type of secret
The Secret Scanner only works with public repositories. If you receive a “Failed to clone repository” error, verify the URL is correct and the repository is publicly accessible. Private repository scanning requires authentication, which is not yet supported.
Gitleaks scans the entire Git history, not just the current branch. A secret that was committed and later deleted will still be detected — because it remains in the Git history and could be recovered by an attacker.
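The grouping described above works directly off Gitleaks' JSON report, which is a flat list of finding objects keyed by `RuleID`, `File`, `StartLine`, `Commit`, `Author`, `Secret`, and so on (v8 report keys). A sketch of the grouping and masking step, with illustrative helper names:

```python
from collections import defaultdict

def mask(secret, keep=4):
    """Partially redact a secret, keeping only the first few characters."""
    return secret[:keep] + "*" * max(len(secret) - keep, 0)

def group_findings(report):
    """Group a Gitleaks JSON report (a flat list of findings) by rule id."""
    grouped = defaultdict(list)
    for finding in report:
        grouped[finding["RuleID"]].append({
            "file": finding["File"],
            "line": finding["StartLine"],
            "commit": finding["Commit"][:8],   # short hash
            "author": finding["Author"],
            "date": finding["Date"],
            "secret": mask(finding["Secret"]),  # never show the full secret
        })
    return dict(grouped)
```
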

Credential Leak Monitor

The Credential Leak Monitor checks domains and company names against the Have I Been Pwned breach database. This helps assessors evaluate whether the assessed entity’s credentials or user data have appeared in known data breaches — relevant to PCI DSS Requirements 8.3 (password security) and 8.6 (account management).

Features

1. Domain Breach Check

Enter a domain name (e.g., example.com) to check if it appears in any known data breaches. Results show all matching breaches with:
| Field | Description |
| --- | --- |
| Breach Name | Name of the breached service |
| Breach Date | When the breach occurred |
| Accounts Affected | Number of compromised accounts |
| Data Classes | Types of data exposed (emails, passwords, phone numbers, etc.) |
| Verified | Whether the breach has been verified by HIBP |

2. Company Search

Search by company or service name to find related breaches. This uses fuzzy matching against breach titles, names, and domains.
3. Breach Database Search

Search the full HIBP breach catalog by keyword. Browse all known breaches or filter by data type (e.g., “passwords”, “credit cards”).
4. Recent Breaches

View breaches added to the HIBP database in the last 90 days, sorted by date. Use this to identify recent incidents that may affect the assessed entity.
5. Breach Statistics

The stats overview shows:
  • Total breaches in the HIBP catalog
  • Total compromised records across all breaches
  • Recent breaches (last 90 days)
  • Top data classes (most commonly exposed data types)
  • Largest breaches (by account count)
The Credential Leak Monitor uses HIBP’s free public APIs. The breach catalog and password check (k-anonymity model) are available without an API key. Domain-specific email breach lookups require a paid HIBP API key (set HIBP_API_KEY in the environment).
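The k-anonymity password check works by sending HIBP only the first five hex characters of the password's SHA-1 hash; matching against the returned suffix list happens locally, so the password never leaves the machine. The range endpoint is HIBP's public API; the helper names below are illustrative:

```python
import hashlib
from urllib.request import urlopen

def sha1_split(password):
    """Uppercase SHA-1 hex digest, split into the 5-char prefix HIBP sees
    and the suffix that never leaves the machine."""
    digest = hashlib.sha1(password.encode()).hexdigest().upper()
    return digest[:5], digest[5:]

def count_in_range(body, suffix):
    """Scan a range response (lines of SUFFIX:COUNT) for our hash suffix."""
    for line in body.splitlines():
        candidate, _, count = line.partition(":")
        if candidate == suffix:
            return int(count)
    return 0  # suffix absent: password not in any known breach corpus

def pwned_count(password):
    """k-anonymity check: only the 5-char hash prefix is sent to HIBP."""
    prefix, suffix = sha1_split(password)
    url = f"https://api.pwnedpasswords.com/range/{prefix}"
    with urlopen(url, timeout=10) as resp:
        return count_in_range(resp.read().decode(), suffix)
```
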

Threat Briefing

The Threat Briefing aggregates real-time threat intelligence from four public sources into a single consolidated view. It provides assessors with an up-to-date picture of the current threat landscape — useful for contextualizing PCI DSS assessment findings and prioritizing remediation.

Intelligence Sources

| Source | Data | API |
| --- | --- | --- |
| NVD | Recent critical CVEs (CVSS 9.0+) | NVD API v2 |
| CISA KEV | Recently added known exploited vulnerabilities | NVD API with hasKev filter |
| FIRST EPSS | Top 10 most exploitable CVEs by probability | FIRST EPSS API |
| HIBP | Recent verified data breaches (last 90 days) | Have I Been Pwned API |

Generating a Briefing

1. Select Time Range

Choose a time range: 7 days, 14 days, 30 days (default), 60 days, or 90 days. This controls how far back NVD and KEV queries look.
2. Generate

Click Generate Briefing. The system queries all four sources in parallel and produces a consolidated report.
3. Review Summary

The top of the briefing shows five summary cards:
| Card | Description |
| --- | --- |
| Critical CVEs | Number of critical-severity CVEs published in the selected period |
| New KEVs | Newly added CISA Known Exploited Vulnerabilities |
| Recent Breaches | Verified data breaches from the last 90 days |
| Top EPSS | Highest exploit probability score among current top CVEs |
| Total Items | Total deduplicated threat items across all sources |

4. Review Threat Items

Each threat item shows:
  • Source icon (NVD, KEV, EPSS, or Breach)
  • Severity badge (Critical, High, Medium, Info)
  • Title with key metric (CVSS score, EPSS percentage, or account count)
  • Date published or added
  • Tags (CVE, KEV, Exploited, EPSS, Breach, data classes)
  • Link to the original source (NVD detail page or HIBP)
Items are sorted by date (newest first), then by severity. Duplicates across sources (e.g., a CVE that appears in both NVD and KEV results) are automatically deduplicated, with KEV entries taking priority.
Use the Threat Briefing at the start of an assessment to understand the current threat landscape. Cross-reference high-priority items with the entity’s technology stack to identify relevant risks, then use the CVE Lookup and CISA KEV Tracker for deeper investigation.
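The dedupe-and-sort behavior can be sketched as follows (a hypothetical illustration; field names such as `cve`, `source`, and `severity` are assumptions, not the actual data model):

```python
SEVERITY_RANK = {"Critical": 0, "High": 1, "Medium": 2, "Info": 3}

def dedupe_and_sort(items):
    """Collapse duplicate CVEs across sources (KEV entries win), then order
    the survivors newest-first, breaking date ties by severity."""
    best = {}
    for item in items:
        key = item["cve"]
        if key not in best or (item["source"] == "KEV"
                               and best[key]["source"] != "KEV"):
            best[key] = item  # KEV takes priority over NVD/EPSS duplicates
    return sorted(
        best.values(),
        key=lambda i: (i["date"], -SEVERITY_RANK.get(i["severity"], 3)),
        reverse=True,  # newest first; higher severity first on date ties
    )
```
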

Remediation Dashboard

The Remediation Dashboard provides a unified view of all findings from all security tools. It does not create new data — it aggregates and displays findings that already exist in the individual tool results.

What It Shows

The dashboard is organized into five sections:

Summary Cards

| Card | Description |
| --- | --- |
| Total Findings | Count of all findings across all tools |
| Critical + High Open | Count of open findings with Critical or High severity (highlighted in red) |
| Remediation Rate | Percentage of findings that are Fixed or Accepted Risk. Color-coded: green (80%+), yellow (50–79%), red (below 50%) |
| Tools with Findings | Count of tools that have at least one finding (e.g., 5/5) |

By Severity

Horizontal bar chart showing finding counts for Critical, High, Medium, and Low severities. Each bar is color-coded and proportional to the total finding count.

By Status

Horizontal bar chart showing finding counts by remediation status: Open, In Progress, Fixed, and Accepted Risk.

By Tool

Breakdown showing which security tool contributed which findings, with the tool icon and count badge.

By PCI Requirement

Table mapping findings to their PCI DSS requirements (4.2.1, 6.3, 11.3.2, 11.4, 2.2.5) with requirement label, total finding count, and critical finding count.

Top Open Findings

A prioritized list of the 20 most severe open findings across all tools. Each row shows the severity badge, tool icon, finding title, target (host/domain), and mapped PCI requirement.
The Remediation Dashboard updates in real-time as you change finding statuses in the individual tool tabs. Switch between tools and the dashboard to track remediation progress as findings are addressed.
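The Remediation Rate calculation and its color thresholds can be sketched as a pair of small functions (a hypothetical illustration of the card's rules, not the dashboard's actual code):

```python
def remediation_rate(findings):
    """Percentage of findings whose status is Fixed or Accepted Risk."""
    if not findings:
        return 0.0
    done = sum(f["status"] in ("Fixed", "Accepted Risk") for f in findings)
    return 100.0 * done / len(findings)

def rate_color(rate):
    """Color-code per the Remediation Rate card thresholds."""
    if rate >= 80:
        return "green"
    if rate >= 50:
        return "yellow"
    return "red"
```
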

The security tools are designed to be used in a logical sequence during a PCI DSS assessment:
1. Port & Service Scanner

Scan or import nmap results for all in-scope network segments. This inventories open ports and flags risky services for Req 1.2.1.
2. SSL/TLS Checker

Check SSL/TLS certificates for all in-scope domains. This establishes the cryptographic baseline and addresses Req 4.2.1.
3. Headers & DNS

Check HTTP security headers and DNS records on the same domains. This identifies server hardening gaps for Req 2.2.5.
4. CVE Lookup

Search for known vulnerabilities in any software identified during the assessment — web servers, databases, libraries, and frameworks. This addresses Req 6.3.
5. Patch Management

Upload the patch management report (WSUS, SCCM, or Qualys) to verify that critical and high patches are applied within 30 days. This addresses Req 6.3.3.
6. Log Audit Validator

Upload representative log samples from each system type in scope. Verify that all six required audit trail fields are present per Req 10.2.
7. Access Review

Upload a user account export from AD, Azure AD, or AWS IAM. Identify inactive accounts, shared accounts, excessive privileges, and missing MFA per Reqs 7.2 and 8.6.
8. ASV Scan Import

Upload the quarterly ASV scan report from the organization’s scanning vendor. This provides the external vulnerability scan evidence for Req 11.3.2.
9. Pen Test Parser

Upload the most recent penetration test report. This provides testing evidence for Req 11.4 (external, internal, and segmentation testing).
10. Payment Page Script Monitor

Scan payment page URLs or import a script inventory. This verifies script integrity controls for the new Req 11.6.1.
11. Firewall Rule Analyzer

Upload firewall rule exports to verify traffic rules are properly scoped with no any-any rules or deprecated protocols. This addresses Reqs 1.2.5 and 1.3.
12. Password Policy Analyzer

Upload a password policy export from AD, Azure AD, or AWS IAM to verify minimum length, complexity, history, and lockout settings per Req 8.3.
13. Anti-Malware Deployment Checker

Upload the endpoint protection report to verify agent deployment, signature freshness, and real-time protection status per Reqs 5.2 and 5.3.
14. FIM Report Parser

Upload the file integrity monitoring report to verify change detection is deployed and alerting on unauthorized modifications per Req 11.5.
15. CISA KEV Tracker

Search the CISA KEV catalog for any CVEs relevant to in-scope systems. Check the ransomware tab and recent additions. This supplements Reqs 6.3 and 11.3 with active exploitation context.
16. Secret Scanner

Scan any public repositories associated with the assessed entity for leaked secrets and credentials. This supports Reqs 6.2 and 6.3.
17. Credential Leak Monitor

Check the entity’s domain and company name against the HIBP breach database. Identify any historical credential exposures relevant to Reqs 8.3 and 8.6.
18. Threat Briefing

Generate a consolidated threat briefing to understand the current threat landscape. Use this to contextualize findings and prioritize remediation.
19. Remediation Dashboard

Review the aggregated findings across all tools. Prioritize Critical and High findings, track remediation progress, and verify that the remediation rate is acceptable before finalizing the assessment.

Auto-Fill Summary

Each tool can auto-fill its corresponding PCI DSS requirement with a structured justification:
| Tool | Target Requirement | Justification Includes |
| --- | --- | --- |
| Port & Service Scanner | 1.2.1 | Hosts, open ports, risky services, PCI issues, compliance status |
| SSL/TLS Checker | 4.2.1 | Domain, grade, protocol version, PCI issues, compliance status |
| CVE Lookup + EPSS | 6.3 | Product, version, CVE count, severity breakdown, CVSS scores, EPSS exploit probability |
| Patch Management | 6.3.3 | Vendor, total patches, missing critical/high, overdue count, compliance status |
| Log Audit Validator | 10.2.1 | Log format, total entries, field completeness (6 fields), confidence levels |
| Access Review | 7.2.1, 7.2.5, 8.6.1 | Source, total accounts, inactive/shared/admin/no-MFA counts, compliance status |
| ASV Scan Import | 11.3.2 | Vendor, scan date, host count, finding count, PASS/FAIL |
| Pen Test Parser | 11.4.2 / 11.4.3 / 11.4.5 | Tool, tester, date, test type, finding count, severity breakdown |
| Headers & DNS | 2.2.5 | Domain, grade, headers passed/total, DNS records, PCI status |
| Payment Page Script Monitor | 11.6.1 | URL, total scripts, third-party count, SRI coverage, CSP presence, compliance status |
| Firewall Rule Analyzer | 1.2.5, 1.3.1, 1.3.2 | Format, total rules, allow/deny counts, flagged violations, compliance status |
| Password Policy Analyzer | 8.3.6, 8.3.7, 8.3.9 | Source, min length, complexity, history, max age, lockout settings, checks passed/failed |
| Anti-Malware Deployment Checker | 5.2.1, 5.3.1, 5.3.2 | Vendor, total endpoints, agents down, signatures outdated, real-time disabled, compliance status |
| FIM Report Parser | 11.5.1, 11.5.2 | Tool, total changes, critical changes, unauthorized count, files monitored, compliance status |

Auto-fill generates draft text based on tool results. The assessor should review and supplement the auto-filled content with additional context, observations, and professional judgment before finalizing the assessment answer.