Spring4shell in FastAPI
How Spring4shell Manifests in FastAPI — Specific Attack Patterns
The Spring4shell vulnerability (CVE-2022-22965) is a remote code execution flaw in Spring Framework: through Spring's data-binding mechanism, attacker-controlled request parameters can be bound to properties of the ClassLoader, ultimately letting an attacker manipulate class loading and achieve arbitrary code execution. FastAPI has no equivalent binding mechanism, but analogous risks arise whenever user-supplied data is allowed to influence code execution paths.
In FastAPI, the primary manifestation occurs when developers inadvertently pass user-controlled data to functions that evaluate code or perform dynamic operations. Common patterns include:
- Unsafe Deserialization: Using `pickle.loads()` on raw request bodies lets attackers craft payloads that execute arbitrary code during parsing; even `json.loads()` without schema validation passes unvalidated structures into downstream logic.
- Expression Injection in Template Engines: If FastAPI apps use Jinja2 templates (via `Jinja2Templates`) and inject user input directly into templates without sanitization, attackers can exploit server-side template injection to execute Python code on the server.
- Dynamic Module/Function Import: Using `importlib.import_module()` or `eval()`/`exec()` with request parameters to dynamically load modules or execute code, e.g., based on a `?action=query` parameter.
- Dependency Injection Abuse: FastAPI's dependency system (`Depends()`) can become risky if dependencies themselves use user input to resolve other dependencies or execute logic unsafely.
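The deserialization risk in the first bullet is easy to reproduce with the standard library alone. A minimal sketch (the `Exploit` class and the `echo` command are illustrative) showing why `pickle.loads()` on untrusted bytes amounts to code execution:

```python
import os
import pickle

class Exploit:
    # pickle calls __reduce__ to learn how to rebuild the object;
    # returning (os.system, ("echo pwned",)) makes *loading* run a shell command.
    def __reduce__(self):
        return (os.system, ("echo pwned",))

payload = pickle.dumps(Exploit())
# pickle.loads(payload) executes `echo pwned` — never unpickle untrusted bytes.
```

An attacker who controls the bytes passed to `pickle.loads()` controls the callable and its arguments, exactly the code-execution impact discussed above.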
For example, a vulnerable FastAPI endpoint might look like this:
```python
from fastapi import FastAPI, Request
import importlib

app = FastAPI()

@app.post("/process")
async def process(request: Request):
    data = await request.json()
    # DANGER: user input decides which module to import and which function to call
    module_name = data.get("module", "default")
    func_name = data.get("function", "run")
    module = importlib.import_module(module_name)
    func = getattr(module, func_name)
    return func(data.get("payload"))
```

Here, an attacker could send `{"module": "os", "function": "system", "payload": "ls"}` to execute arbitrary system commands, mirroring the code execution impact of Spring4shell.
FastAPI-Specific Detection — Identifying the Issue
Detecting these patterns requires analyzing both the API specification (OpenAPI) and runtime behavior. middleBrick's Input Validation and BOLA/IDOR checks are designed to flag endpoints that accept unstructured input and pass it to dangerous operations.
Specifically, middleBrick tests for:
- Unsafe Deserialization: By sending crafted JSON payloads with nested objects or arrays that trigger known exploit patterns (e.g., `__reduce__` in pickle), the scanner observes whether the endpoint behaves unexpectedly or leaks errors indicating unsafe deserialization.
- Expression/Code Injection: The scanner probes endpoints with payloads like `{{ config.__class__.__init__.__globals__['os'].system('id') }}` in JSON fields or query parameters, monitoring response content for command execution artifacts (e.g., `uid=`).
- Dynamic Import Abuse: By attempting to reference common modules (`os`, `sys`, `subprocess`) via parameterized paths, middleBrick checks whether the endpoint returns system-level information or executes commands.
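middleBrick's exact probe logic is proprietary, but the underlying heuristic for the expression-injection check can be sketched in a few lines. The helper below is illustrative, not part of any middleBrick API: if the computed result of an arithmetic probe appears in the response while the raw probe string does not, the input was likely evaluated server-side rather than merely echoed back.

```python
PROBE = "{{ 7 * 7 }}"

def looks_evaluated(probe: str, response_body: str) -> bool:
    # Evaluated: the computed result appears, the raw template syntax does not.
    # Reflected: the probe is echoed back verbatim, i.e. no evaluation happened.
    return "49" in response_body and probe not in response_body

print(looks_evaluated(PROBE, "<h1>Report 49</h1>"))        # evaluated
print(looks_evaluated(PROBE, f"<h1>Report {PROBE}</h1>"))  # merely reflected
```

Because the probe is sent as ordinary request data and only the response body is inspected, this style of check needs no credentials or source access.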
To scan a FastAPI application with middleBrick, use the CLI tool after deploying your API to a test environment:
```bash
# Install the CLI
npm install -g middlebrick

# Scan your FastAPI endpoint (ensure it's publicly accessible or use a tunnel)
middlebrick scan https://your-fastapi-app.com/openapi.json
```

The scanner will analyze the OpenAPI spec to identify all endpoints, then send sequential probes. If vulnerable patterns exist, the report will highlight them under the Input Validation category with high severity, including the exact request that triggered the issue and a remediation description like "Avoid using user input in dynamic imports or code evaluation."
For CI/CD integration, the GitHub Action can automatically scan staging APIs on every pull request:
```yaml
# In .github/workflows/api-security.yml
- name: Run middleBrick scan
  uses: middlebrick/github-action@v1
  with:
    api_url: ${{ secrets.STAGING_API_URL }}
    fail_below_score: 80
```

This fails the build if the scan detects critical input validation issues.
FastAPI-Specific Remediation — Code Fixes Using Native Features
Remediation in FastAPI centers on leveraging its built-in request validation via Pydantic, avoiding dynamic code execution, and using safe parsing libraries. Here are concrete fixes:
1. Enforce Strict Schema Validation
Replace raw `request.json()` calls with Pydantic models. This ensures incoming data conforms to expected types and structures, blocking unexpected fields that could be used in attacks.
```python
from pydantic import BaseModel

class ProcessRequest(BaseModel):
    module: str
    function: str
    payload: dict

@app.post("/process")
async def process(request: ProcessRequest):
    # DANGER: still risky if module/function are used dynamically
    ...
```

However, validation alone isn't enough—the model above still allows arbitrary strings for `module` and `function`. Remediation requires eliminating dynamic imports entirely.
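Pydantic can also reject unexpected values at the schema layer, before the handler ever runs. A sketch using `typing.Literal` (the allowed names here are hypothetical placeholders, not part of any real module):

```python
from typing import Literal

from pydantic import BaseModel, ValidationError

class StrictProcessRequest(BaseModel):
    # Only these exact strings validate; "os"/"system" fail at parse time.
    module: Literal["data_processor", "report_generator"]
    function: Literal["run", "generate"]
    payload: dict

try:
    StrictProcessRequest(module="os", function="system", payload={})
except ValidationError:
    print("rejected at validation time")
```

In a FastAPI endpoint this means a malicious request is answered with a 422 automatically, but the dynamic-import sink must still be removed, as the next section shows.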
2. Whitelist Allowed Operations
Map user input to a predefined set of safe functions. Never use user input directly in `importlib` or `getattr`.
```python
from fastapi import HTTPException

from app.modules import data_processor, report_generator

# User-supplied names resolve only through this fixed mapping;
# user input never reaches importlib or getattr.
ALLOWED_OPERATIONS = {
    ("data_processor", "run"): data_processor.run,
    ("report_generator", "generate"): report_generator.generate,
}

@app.post("/process")
async def process(request: ProcessRequest):
    func = ALLOWED_OPERATIONS.get((request.module, request.function))
    if func is None:
        raise HTTPException(status_code=400, detail="Invalid operation")
    return func(request.payload)
```

3. Avoid Template Injection
If using Jinja2, never pass raw user input to templates. Use autoescaping and render only static templates with pre-validated context.
```python
from fastapi import Request
from fastapi.templating import Jinja2Templates

templates = Jinja2Templates(directory="templates")

@app.get("/report/{report_id}")
async def get_report(request: Request, report_id: str):
    # Safe: report_id comes from the URL path and never selects a template
    report_data = get_report_from_db(report_id)  # fetch from DB
    return templates.TemplateResponse(
        "report.html",
        {"request": request, "report": report_data}
    )

# NEVER do:
# templates.TemplateResponse("report.html", {"request": request, "user_input": request.query_params.get("template")})
```

4. Use Safe Parsers
For configurations or data files, avoid `pickle` and `yaml.load()` without safe loaders. Prefer JSON (via Pydantic) or `yaml.safe_load()`.
```python
import yaml
from pydantic import BaseModel

# UNSAFE:
# config = yaml.load(user_supplied_yaml, Loader=yaml.FullLoader)

# SAFE:
config = yaml.safe_load(user_supplied_yaml)

# Even better: validate the parsed config against a Pydantic model
class ConfigModel(BaseModel):
    max_items: int
    allowed_types: list[str]

validated_config = ConfigModel(**config)
```

After applying these fixes, re-scan with middleBrick to verify the Input Validation score improves. The Pro plan's continuous monitoring can alert you if a future commit reintroduces risky patterns.
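To see why `yaml.safe_load()` is the recommended loader: it has no constructors for the `!!python/...` tags that make unsafe loaders dangerous, so such payloads are rejected with an error instead of being executed. A quick sketch:

```python
import yaml

# A classic PyYAML RCE payload: a tag instructing the loader to call os.system.
malicious = "!!python/object/apply:os.system ['id']"

try:
    yaml.safe_load(malicious)
except yaml.YAMLError:
    # safe_load cannot construct python/* objects, so the payload is rejected.
    print("blocked")

# Plain data still parses normally.
print(yaml.safe_load("max_items: 5"))
```

The same input passed to `yaml.load()` with `yaml.UnsafeLoader` would run the shell command, which is exactly the deserialization risk discussed throughout this section.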
Frequently Asked Questions
Is FastAPI inherently vulnerable to Spring4shell-style attacks?
No. FastAPI has no SpEL-style expression evaluator, and Pydantic validates request bodies by default. The risk comes from developer-introduced patterns such as `eval()`, unsafe deserialization, or dynamic imports with user input—which are language-agnostic risks. Proper use of FastAPI's validation features mitigates these.

How does middleBrick detect expression injection in FastAPI without credentials?

The scanner sends template-injection probes such as `{{ 1+1 }}` or `{{ ''.__class__.__mro__[1].__subclasses__() }}` in JSON fields and analyzes responses for signs of evaluation (e.g., `2` or class listings). If the endpoint reflects processed input or returns errors exposing Python internals, it flags an Input Validation issue. This requires no credentials—only the API's base URL and OpenAPI spec.