Understanding Your Score
The 0–100 Score
Every scan produces a security score from 0 to 100. Higher is more secure. The score reflects how your API performs across all 12 security categories, weighted by importance.
The score is not a percentage of “tests passed.” It’s a weighted risk assessment. A single critical vulnerability (like unauthenticated admin access) will drop your score far more than several informational observations.
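The exact deduction values aren’t published, so as a rough illustration only (the numbers below are assumptions, not the engine’s actual weights), a weighted risk model behaves like this:

```python
# Illustrative only: assumed per-severity deductions, not the scanner's real weights.
DEDUCTIONS = {"critical": 30, "high": 15, "medium": 6, "low": 2, "info": 0}

def illustrative_score(findings):
    """Start at 100 and subtract a weighted deduction for each finding."""
    score = 100 - sum(DEDUCTIONS[severity] for severity in findings)
    return max(score, 0)

# One critical finding costs far more than several informational observations:
print(illustrative_score(["critical"]))  # 70
print(illustrative_score(["info"] * 5))  # 100
```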
Letter Grades
Your score maps to a letter grade for quick assessment:
| Grade | Meaning | Typical action |
|---|---|---|
| A | Excellent, minimal risk, well-secured | Maintain with regular scanning |
| B | Good, minor issues, low risk | Address findings when convenient |
| C | Fair, moderate issues present | Schedule fixes in your next sprint |
| D | Poor, significant vulnerabilities | Prioritize remediation immediately |
| F | Critical, serious security risks | Stop and fix before deploying further |
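The score-to-grade mapping can be sketched as a simple threshold lookup. The cutoffs below are assumptions for illustration; the product’s actual boundaries may differ:

```python
# Assumed grade cutoffs (90/80/70/60) -- illustrative, not the official thresholds.
def letter_grade(score: int) -> str:
    """Map a 0-100 security score to a letter grade."""
    if score >= 90:
        return "A"
    if score >= 80:
        return "B"
    if score >= 70:
        return "C"
    if score >= 60:
        return "D"
    return "F"

print(letter_grade(92))  # A
print(letter_grade(55))  # F
```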
What’s a “good” score?
Most APIs score between 50 and 80 on their first scan. Don’t be alarmed by a C or D; it’s common for APIs that haven’t been through dedicated security testing. The goal is to identify issues and improve over time.
Benchmarks by maturity:
- New/untested APIs: 40–60 is typical
- Production APIs with basic security: 65–80
- Security-hardened APIs: 85+
- Perfect score (100): rare, and not necessarily the goal due to diminishing returns on minor findings
Security Categories
Your score is composed of results across 12 security categories, each covering a different aspect of API security:
| Category | What it covers |
|---|---|
| Authentication | Auth enforcement, JWT security, security headers |
| BOLA / IDOR | Object-level access control |
| BFLA / Privilege Escalation | Function-level access control |
| Property Authorization | Over-exposed fields, mass assignment |
| Input Validation | CORS, content types, unsafe methods |
| Rate Limiting | Throttling, pagination, response size |
| Data Exposure | PII, credentials, error leakage |
| Encryption | HTTPS, HSTS, cookie security |
| SSRF | URL injection, internal IP leakage |
| Inventory Management | Versioning, legacy paths, fingerprinting |
| Unsafe Consumption | Third-party dependencies, webhooks |
| LLM / AI Security | Prompt injection, jailbreaks, data leakage |
Categories are weighted based on their relative risk impact. Authentication and data exposure carry more weight than inventory management, for example.
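Conceptually, this weighting amounts to a weighted average of per-category results. The weight values below are hypothetical (the real weights aren’t published); only the shape of the calculation is the point:

```python
# Hypothetical category weights: authentication and data exposure heaviest,
# inventory management lightest. Not the engine's actual values.
WEIGHTS = {"authentication": 3.0, "data_exposure": 2.5, "inventory_management": 1.0}

def overall_score(category_scores):
    """Weighted average of per-category scores (each on a 0-100 scale)."""
    total_weight = sum(WEIGHTS[c] for c in category_scores)
    weighted_sum = sum(category_scores[c] * WEIGHTS[c] for c in category_scores)
    return weighted_sum / total_weight

# A weak authentication result drags the total down more than a weak
# inventory result would, because its weight is larger.
print(overall_score({"authentication": 50, "inventory_management": 100}))  # 62.5
```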
Finding Severities
Each finding is assigned a severity level based on exploitability and impact:
Critical
Actively exploitable vulnerabilities with immediate risk. These represent real attack paths that could lead to data breaches, unauthorized access, or system compromise.
Examples: unauthenticated admin access, system prompt leakage in AI endpoints, CORS wildcard with credentials, exposed database credentials in responses.
High
Significant security weaknesses that attackers can leverage. These require some conditions to exploit but represent serious risk.
Examples: authentication bypass via HTTP method switching, PII exposed in responses (emails, SSNs), IDOR with confirmed data access, missing encryption redirect.
Medium
Issues that weaken your security posture. Not immediately exploitable on their own, but they reduce defense depth.
Examples: missing rate limiting, weak HSTS configuration, unsafe HTTP methods exposed, overly permissive CORS.
Low
Minor issues and hardening recommendations. Good security hygiene but lower risk.
Examples: missing optional security headers, server technology fingerprinting, minor configuration improvements.
Info
Observations that may or may not be relevant depending on your context. Not vulnerabilities, but worth reviewing.
Examples: detected API framework, response metadata, spec mismatches that may be intentional.
How Severity Affects Your Score
Higher severity findings have a dramatically larger impact:
- A single critical finding can drop your score by a large margin
- High findings have significant impact
- Medium findings have moderate impact
- Low and info findings have minimal to no score impact
This means you get the biggest score improvements by fixing critical and high findings first. Chasing low/info findings has diminishing returns.
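A practical way to apply this priority order is to sort your findings by severity before working through them. A minimal sketch (the finding structure here is assumed, not the scanner’s export format):

```python
# Rank order for remediation: lower number = fix first.
SEVERITY_ORDER = {"critical": 0, "high": 1, "medium": 2, "low": 3, "info": 4}

def remediation_queue(findings):
    """Sort findings so critical and high issues come first."""
    return sorted(findings, key=lambda f: SEVERITY_ORDER[f["severity"]])

findings = [
    {"id": 1, "severity": "low"},
    {"id": 2, "severity": "critical"},
    {"id": 3, "severity": "medium"},
]
print([f["id"] for f in remediation_queue(findings)])  # [2, 3, 1]
```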
How to Improve Your Score
Quick wins (biggest impact)
- Fix critical findings. Each one you resolve produces the largest score jump.
- Address authentication issues. Auth problems are heavily weighted because they affect everything downstream.
- Stop exposing sensitive data. Remove PII, credentials, and error details from responses.
Medium-term improvements
- Provide your OpenAPI spec. This enables deeper analysis and catches spec-vs-runtime mismatches. Many teams see 2–5 additional findings from spec analysis alone.
- Add rate limiting. Even basic throttling (429 responses) improves your score.
- Enforce HTTPS properly. HSTS with a long max-age, secure cookie flags, no mixed content.
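Basic throttling really can be minimal. As a sketch of the idea (a fixed-window counter that starts answering 429 once a client exceeds its allowance; not production code, and your framework likely has a middleware for this):

```python
import time

class RateLimiter:
    """Fixed-window limiter: allow `limit` requests per client per window,
    then return 429 until the window resets. Illustrative sketch only."""

    def __init__(self, limit=100, window=60):
        self.limit, self.window = limit, window
        self.hits = {}  # client_id -> (window_start, request_count)

    def check(self, client_id, now=None):
        now = time.time() if now is None else now
        start, count = self.hits.get(client_id, (now, 0))
        if now - start >= self.window:
            start, count = now, 0  # window expired: start a fresh one
        count += 1
        self.hits[client_id] = (start, count)
        return 200 if count <= self.limit else 429

limiter = RateLimiter(limit=2, window=60)
print(limiter.check("client-a", now=0))  # 200
print(limiter.check("client-a", now=1))  # 200
print(limiter.check("client-a", now=2))  # 429
```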
Ongoing hardening
- Set the right context. Use `financial`, `medical`, `public`, or `internal` so the engine prioritizes checks relevant to your API type.
- Scan on every deploy to catch regressions before they reach production.
- Use CI/CD integration. The GitHub Action can block deploys when your score drops below a threshold.
- Review info findings. They’re not vulnerabilities, but they sometimes reveal unexpected behavior worth investigating.
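The score-threshold gate used by the GitHub Action can be approximated in any pipeline. A hypothetical gate script (how you obtain the score depends on your integration; the value here is a placeholder):

```python
import sys

THRESHOLD = 70  # assumed threshold; pick one that matches your risk tolerance

def gate(score: int) -> int:
    """Return a process exit code: 0 lets the deploy proceed, 1 blocks it."""
    if score < THRESHOLD:
        print(f"Score {score} is below threshold {THRESHOLD}: blocking deploy")
        return 1
    print(f"Score {score} meets threshold {THRESHOLD}")
    return 0

if __name__ == "__main__":
    # Placeholder score; in CI this would come from the latest scan result.
    sys.exit(gate(72))
```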
Tracking Progress
Every scan is stored in your dashboard history. Over time, you’ll see your score trend, which is more valuable than any single scan. Look for:
- Score stability: is your score holding steady or regressing with each deploy?
- New findings: are new issues appearing, or are you only seeing known ones?
- Category improvements: which areas are getting better, and which are stuck?
For teams on the Pro plan, continuous monitoring runs scans automatically and alerts you when your score changes.
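If you export your scan history, spotting regressions is a one-liner over consecutive score deltas. A sketch (the history list here is invented sample data):

```python
def score_deltas(history):
    """Per-scan change in score, oldest to newest; negatives are regressions."""
    return [b - a for a, b in zip(history, history[1:])]

def regressed(history, tolerance=0):
    """True if any scan dropped the score by more than `tolerance` points."""
    return any(delta < -tolerance for delta in score_deltas(history))

print(score_deltas([60, 72, 68]))  # [12, -4]
print(regressed([60, 72, 68]))     # True
print(regressed([60, 65, 70]))     # False
```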