Understanding Your Score

Every scan produces a security score from 0 to 100. Higher is more secure. The score reflects how your API performs across all 12 security categories, weighted by importance.

The score is not a percentage of “tests passed.” It’s a weighted risk assessment. A single critical vulnerability (like unauthenticated admin access) will drop your score far more than several informational observations.

Your score maps to a letter grade for quick assessment:

| Grade | Meaning | Typical action |
|-------|---------|----------------|
| A | Excellent: minimal risk, well-secured | Maintain with regular scanning |
| B | Good: minor issues, low risk | Address findings when convenient |
| C | Fair: moderate issues present | Schedule fixes in your next sprint |
| D | Poor: significant vulnerabilities | Prioritize remediation immediately |
| F | Critical: serious security risks | Stop and fix before deploying further |

Most APIs score between 50 and 80 on their first scan. Don't be alarmed by a C or D; these grades are common for APIs that haven't been through dedicated security testing. The goal is to identify issues and improve over time.

Benchmarks by maturity:

  • New/untested APIs: 40–60 is typical
  • Production APIs with basic security: 65–80
  • Security-hardened APIs: 85+
  • Perfect score (100): rare, and not necessarily the goal due to diminishing returns on minor findings

Your score is composed of results across 12 security categories, each covering a different aspect of API security:

| Category | What it covers |
|----------|----------------|
| Authentication | Auth enforcement, JWT security, security headers |
| BOLA / IDOR | Object-level access control |
| BFLA / Privilege Escalation | Function-level access control |
| Property Authorization | Over-exposed fields, mass assignment |
| Input Validation | CORS, content types, unsafe methods |
| Rate Limiting | Throttling, pagination, response size |
| Data Exposure | PII, credentials, error leakage |
| Encryption | HTTPS, HSTS, cookie security |
| SSRF | URL injection, internal IP leakage |
| Inventory Management | Versioning, legacy paths, fingerprinting |
| Unsafe Consumption | Third-party dependencies, webhooks |
| LLM / AI Security | Prompt injection, jailbreaks, data leakage |

Categories are weighted based on their relative risk impact. Authentication and data exposure carry more weight than inventory management, for example.
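As an illustration, category weighting can be sketched as a weighted average over per-category results. The weights and scores below are invented for this sketch; the scanner's actual weights are not published here.

```python
# Hypothetical illustration of weighted category scoring.
# Weight values are assumptions for the example, not the engine's real weights.
CATEGORY_WEIGHTS = {
    "Authentication": 3.0,        # heavily weighted
    "Data Exposure": 3.0,         # heavily weighted
    "Rate Limiting": 1.5,
    "Inventory Management": 1.0,  # lighter weight
}

def overall_score(category_scores: dict[str, float]) -> float:
    """Weighted average of per-category scores (each 0-100)."""
    total_weight = sum(CATEGORY_WEIGHTS[c] for c in category_scores)
    weighted = sum(category_scores[c] * CATEGORY_WEIGHTS[c]
                   for c in category_scores)
    return weighted / total_weight

scores = {"Authentication": 40.0, "Data Exposure": 90.0,
          "Rate Limiting": 90.0, "Inventory Management": 90.0}
print(round(overall_score(scores), 1))
```

With these made-up weights, a weak Authentication result drags the overall score to about 72, well below the unweighted average of 77.5 for the same inputs.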

Each finding is assigned a severity level based on exploitability and impact:

Critical

Actively exploitable vulnerabilities with immediate risk. These represent real attack paths that could lead to data breaches, unauthorized access, or system compromise.

Examples: unauthenticated admin access, system prompt leakage in AI endpoints, CORS wildcard with credentials, exposed database credentials in responses.

High

Significant security weaknesses that attackers can leverage. These require some conditions to exploit but represent serious risk.

Examples: authentication bypass via HTTP method switching, PII exposed in responses (emails, SSNs), IDOR with confirmed data access, missing encryption redirect.

Medium

Issues that weaken your security posture. Not immediately exploitable on their own, but they reduce defense depth.

Examples: missing rate limiting, weak HSTS configuration, unsafe HTTP methods exposed, overly permissive CORS.

Low

Minor issues and hardening recommendations. Good security hygiene but lower risk.

Examples: missing optional security headers, server technology fingerprinting, minor configuration improvements.

Info

Observations that may or may not be relevant depending on your context. Not vulnerabilities, but worth reviewing.

Examples: detected API framework, response metadata, spec mismatches that may be intentional.

Higher-severity findings have a dramatically larger score impact:

  • A single critical finding can drop your score by a large margin
  • High findings have significant impact
  • Medium findings have moderate impact
  • Low and info findings have minimal to no score impact

This means you get the biggest score improvements by fixing critical and high findings first. Chasing low/info findings has diminishing returns.
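A toy penalty model makes the asymmetry concrete. The penalty values below are assumptions for illustration; the real scoring is more nuanced, but the shape is the point.

```python
# Toy severity-penalty model: the exact numbers are invented for this
# sketch; what matters is how steeply penalties grow with severity.
PENALTY = {"critical": 30, "high": 12, "medium": 4, "low": 1, "info": 0}

def score_after(findings: list[str], base: int = 100) -> int:
    """Score remaining after subtracting a penalty per finding."""
    return max(0, base - sum(PENALTY[s] for s in findings))

print(score_after(["critical"]))                 # a single critical finding
print(score_after(["low"] * 10 + ["info"] * 5))  # fifteen minor findings
```

In this toy model, one critical finding (score 70) costs more than ten lows and five infos combined (score 90), which is why remediation order matters.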

To raise your score, start with the highest-impact fixes:

  1. Fix critical findings. Each one you resolve produces the largest score jump.
  2. Address authentication issues. Auth problems are heavily weighted because they affect everything downstream.
  3. Stop exposing sensitive data. Remove PII, credentials, and error details from responses.
Then move on to broader hardening:

  1. Provide your OpenAPI spec. This enables deeper analysis and catches spec-vs-runtime mismatches. Many teams see 2–5 additional findings from spec analysis alone.
  2. Add rate limiting. Even basic throttling (429 responses) improves your score.
  3. Enforce HTTPS properly. HSTS with a long max-age, secure cookie flags, no mixed content.
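Two of the items above, throttling with 429 responses and HSTS, can be sketched with a minimal token-bucket limiter. The bucket parameters, header values, and handler shape are illustrative assumptions, not a prescribed configuration.

```python
import time

class TokenBucket:
    """Minimal token bucket: refills `rate` tokens/second up to `capacity`."""
    def __init__(self, rate: float = 5.0, capacity: float = 10.0):
        self.rate, self.capacity = rate, capacity
        self.tokens, self.last = capacity, time.monotonic()

    def allow(self) -> bool:
        now = time.monotonic()
        self.tokens = min(self.capacity, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= 1:
            self.tokens -= 1
            return True
        return False

def handle(bucket: TokenBucket) -> tuple[int, dict[str, str]]:
    """Return (status, headers) for one request; always send HSTS."""
    headers = {"Strict-Transport-Security": "max-age=31536000; includeSubDomains"}
    if not bucket.allow():
        headers["Retry-After"] = "1"
        return 429, headers  # throttled: the scanner looks for 429 responses
    return 200, headers
```

In a real deployment you would keep one bucket per client (API key or IP) and set HSTS at the edge, but even this shape is enough to produce the 429 behavior the scan checks for.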
Finally, make scanning part of your workflow:

  1. Set the right context. Use financial, medical, public, or internal so the engine prioritizes checks relevant to your API type.
  2. Scan on every deploy to catch regressions before they reach production.
  3. Use CI/CD integration. The GitHub Action can block deploys when your score drops below a threshold.
  4. Review info findings. They’re not vulnerabilities, but they sometimes reveal unexpected behavior worth investigating.
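If you'd rather gate deploys in your own pipeline than use the GitHub Action, the check reduces to comparing the scan's score against a threshold and failing the build. The report shape (`{"score": ...}`) and threshold value here are assumptions for the sketch:

```python
import json
import sys

THRESHOLD = 70  # assumed minimum acceptable score for this sketch

def gate(report_path: str, threshold: int = THRESHOLD) -> int:
    """Return a process exit code: 0 to pass the build, 1 to block it."""
    with open(report_path) as f:
        score = json.load(f)["score"]  # assumed report field name
    if score < threshold:
        print(f"Security score {score} is below threshold {threshold}; blocking deploy.")
        return 1
    print(f"Security score {score} meets threshold {threshold}.")
    return 0

if __name__ == "__main__" and len(sys.argv) > 1:
    sys.exit(gate(sys.argv[1]))
```

Wiring this into CI as a post-scan step gives the same block-on-regression behavior described above, with the threshold under your control.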

Every scan is stored in your dashboard history. Over time, you’ll see your score trend — which is more valuable than any single scan. Look for:

  • Score stability: is your score holding steady or regressing with each deploy?
  • New findings: are new issues appearing, or are you only seeing known ones?
  • Category improvements: which areas are getting better, and which are stuck?

For teams on the Pro plan, continuous monitoring runs scans automatically and alerts you when your score changes.