Heap Overflow in FastAPI
How Heap Overflow Manifests in FastAPI
Heap overflow vulnerabilities in FastAPI applications typically emerge from improper handling of dynamic data structures and unbounded input processing. FastAPI's asynchronous, high-performance design can inadvertently create conditions where attackers exhaust memory through carefully crafted requests.
The most common heap overflow pattern in FastAPI involves unbounded list or dictionary operations. Consider this vulnerable endpoint:

```python
@app.post("/process-items/")
async def process_items(items: List[Dict]):
    results = []
    for item in items:
        results.append(process_item(item))
    return results
```

An attacker can send a request containing millions of items, causing the server to allocate massive amounts of memory for the results list. FastAPI's Pydantic models don't inherently limit collection sizes, making this a silent vulnerability.
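The asymmetry this creates can be seen with a short sketch (a hypothetical helper, not part of any codebase): the request stays cheap for the attacker to build while server-side allocations grow with the item count.

```python
import json

def build_flood_payload(n_items: int) -> bytes:
    # Each item is only a few bytes on the wire, but the endpoint
    # materializes one processed result per item, so server memory
    # grows with n_items while the request is trivial to generate.
    items = [{"id": i, "data": "a"} for i in range(n_items)]
    return json.dumps(items).encode()

payload = build_flood_payload(100_000)
```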
Another FastAPI-specific heap overflow scenario occurs with file upload endpoints. FastAPI's UploadFile interface supports streaming, but developers often omit file size limits entirely:

```python
@app.post("/upload/")
async def upload_file(file: UploadFile = File(...)):
    content = await file.read()  # No size limit - heap overflow risk
    process_content(content)
    return {"status": "success"}
```

Without proper size constraints, an attacker can upload files of arbitrary size, consuming all available heap memory. FastAPI's default configuration doesn't enforce file size limits, leaving this as a common developer oversight.
JSON parsing attacks are another heap overflow vector. FastAPI uses Pydantic for request validation, but deeply nested or excessively large JSON structures drive up memory and recursion costs during parsing:

```python
@app.post("/nested-data/")
async def nested_data(data: NestedModel):
    return data.dict()
```

If NestedModel contains recursive or deeply nested structures without depth or size limits, attackers can craft payloads that cause excessive memory allocation during parsing.
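To illustrate, a payload of arbitrary nesting depth costs the attacker almost nothing to produce (a hypothetical helper, assuming the model imposes no depth limit):

```python
import json

def nested_payload(depth: int) -> str:
    # Wrap a trivial value in `depth` layers of objects: the string
    # grows only linearly, but the parser and model validation must
    # walk (and allocate for) every level.
    payload = {"value": 1}
    for _ in range(depth):
        payload = {"child": payload}
    return json.dumps(payload)

deep = nested_payload(500)
```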
FastAPI's dependency injection system can also be exploited for heap overflow attacks. Consider this vulnerable pattern:

```python
@app.post("/process-with-deps/")
async def process_with_deps(
    dep1: Dependency = Depends(),
    dep2: Dependency = Depends(),
    data: List[Dict] = Body(...),
):
    # Dependencies might create large objects without limits
    return process_data(data)
```

Malicious or careless dependencies can create large objects or maintain state that grows with each request, eventually exhausting heap memory.
FastAPI-Specific Detection
Detecting heap overflow vulnerabilities in FastAPI requires both static analysis and runtime monitoring. The middleBrick API security scanner provides FastAPI-specific detection capabilities that identify these memory exhaustion patterns.
middleBrick's black-box scanning approach tests FastAPI endpoints without requiring source code access. For heap overflow detection, it employs several techniques:
Input Size Testing: middleBrick systematically tests endpoint parameters with progressively larger inputs, monitoring memory consumption patterns. For FastAPI's Pydantic-based validation, it attempts to bypass size limits by crafting oversized JSON arrays, strings, and nested objects.
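The same idea can be sketched with the standard library alone (this illustrates the technique, not middleBrick's implementation; the target URL is hypothetical):

```python
import json
import urllib.request

def size_ladder(start: int, stop: int, factor: int = 10):
    # Yield geometrically growing payload sizes, e.g. 10, 100, 1000, ...
    size = start
    while size <= stop:
        yield size
        size *= factor

def probe_endpoint(url: str, max_items: int = 1_000_000) -> None:
    # Send progressively larger item lists and watch for slowdowns,
    # 5xx responses, or timeouts that hint at unbounded allocation.
    for n in size_ladder(10, max_items):
        body = json.dumps([{"id": i, "data": "a"} for i in range(n)]).encode()
        req = urllib.request.Request(
            url,
            data=body,
            headers={"Content-Type": "application/json"},
            method="POST",
        )
        with urllib.request.urlopen(req, timeout=30) as resp:
            print(n, resp.status)

# probe_endpoint("https://api.example.com/process-items")
```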
File Upload Analysis: The scanner tests file upload endpoints by sending files of increasing size, identifying endpoints that don't enforce proper size limits. It specifically looks for FastAPI UploadFile usage patterns that allow unlimited reads.
Rate-Limited Heap Exhaustion: middleBrick tests whether FastAPI applications properly handle concurrent requests that could trigger heap overflow through resource exhaustion. It sends multiple large requests simultaneously to identify race conditions in memory allocation.
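A minimal version of this concurrent probe, using only the standard library (the URL is hypothetical; a real scanner would also track timing and error rates):

```python
import concurrent.futures
import json
import urllib.request

def build_payload(n_items: int) -> bytes:
    return json.dumps([{"id": i, "data": "a"} for i in range(n_items)]).encode()

def send_once(url: str, body: bytes) -> int:
    req = urllib.request.Request(
        url,
        data=body,
        headers={"Content-Type": "application/json"},
        method="POST",
    )
    with urllib.request.urlopen(req, timeout=30) as resp:
        return resp.status

def concurrent_probe(url: str, workers: int = 20, n_items: int = 100_000):
    # Fire `workers` large requests at once: if per-request allocation
    # is unbounded, peak server memory is roughly workers x the cost
    # of a single request.
    body = build_payload(n_items)
    with concurrent.futures.ThreadPoolExecutor(max_workers=workers) as pool:
        return list(pool.map(lambda _: send_once(url, body), range(workers)))
```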
OpenAPI Spec Analysis: When FastAPI applications expose their OpenAPI specification, middleBrick cross-references the documented schemas with actual runtime behavior, flagging cases where the spec suggests size limits that the implementation doesn't enforce.
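A simplified version of this cross-check needs no scanner at all: walk the app's /openapi.json and flag every array schema that declares no maxItems (a sketch; real specs also need $ref resolution):

```python
def unbounded_arrays(schema, path=""):
    # Recursively collect JSON Schema paths where an array type is
    # declared without a corresponding maxItems bound.
    findings = []
    if isinstance(schema, dict):
        if schema.get("type") == "array" and "maxItems" not in schema:
            findings.append(path or "<root>")
        for key, value in schema.items():
            findings.extend(unbounded_arrays(value, f"{path}/{key}"))
    elif isinstance(schema, list):
        for i, value in enumerate(schema):
            findings.extend(unbounded_arrays(value, f"{path}/{i}"))
    return findings
```

Every reported path is a candidate for the progressive input-size tests described above.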
Sample middleBrick CLI usage for FastAPI heap overflow testing:

```bash
middlebrick scan https://api.example.com --path /process-items \
  --method POST \
  --payload '{"items": [{"id": 1, "data": "a"}]}' \
  --max-size 1000000 \
  --heap-test
```

This command tests the /process-items endpoint with progressively larger payloads up to 1 MB, specifically looking for heap overflow vulnerabilities.
middleBrick's LLM security module also detects heap overflow risks in AI-powered FastAPI endpoints, testing for excessive memory consumption in model inference endpoints that process large context windows or batch requests.
FastAPI-Specific Remediation
FastAPI provides several native mechanisms to prevent heap overflow vulnerabilities. The key is implementing proper input validation and resource limits at the framework level.
Size-Limited Pydantic Models: Create custom Pydantic types that enforce size limits (Pydantic v1 style shown):
```python
from pydantic import BaseModel, validator
from typing import List, Dict

class BoundedString(str):
    @classmethod
    def __get_validators__(cls):
        yield cls.validate

    @classmethod
    def validate(cls, v):
        if len(v) > 10_000:  # 10KB limit
            raise ValueError("String too large")
        return cls(v)

class BoundedList(BaseModel):
    __root__: List[Dict]

    @validator("__root__")
    def limit_items(cls, v):
        if len(v) > 1000:  # Max 1000 items
            raise ValueError("List too large")
        return v
```

Apply these models to your FastAPI endpoints:

```python
@app.post("/safe-items/")
async def safe_items(items: BoundedList = Body(...)):
    return process_items(items.__root__)
```

File Upload Protection: Implement proper file size limits in FastAPI:
```python
from fastapi import UploadFile, File, HTTPException

MAX_FILE_SIZE = 10 * 1024 * 1024  # 10MB

async def safe_file_read(file: UploadFile):
    content = await file.read(MAX_FILE_SIZE)
    if await file.read(1):  # Any byte left over means the file exceeds the limit
        raise HTTPException(status_code=413, detail="File too large")
    return content

@app.post("/upload-safe/")
async def upload_safe(file: UploadFile = File(...)):
    content = await safe_file_read(file)
    return process_content(content)
```

Streaming for Large Data: Use FastAPI's streaming capabilities instead of loading everything into memory:
```python
from fastapi.responses import StreamingResponse

@app.post("/process-streaming/")
async def process_streaming(file: UploadFile = File(...)):
    async def file_generator():
        chunk_size = 64 * 1024  # 64KB chunks
        while chunk := await file.read(chunk_size):
            yield process_chunk(chunk)

    return StreamingResponse(file_generator())
```

Global Size Limits: Enforce a global request size limit with middleware:
```python
from fastapi import FastAPI, Request
from fastapi.responses import JSONResponse

app = FastAPI()

@app.middleware("http")
async def size_limit_middleware(request: Request, call_next):
    # Note: chunked requests carry no Content-Length, so pair this
    # check with bounded reads inside the endpoints themselves.
    if int(request.headers.get("content-length", 0)) > 10 * 1024 * 1024:  # 10MB limit
        return JSONResponse(
            status_code=413,
            content={"detail": "Request too large"},
        )
    return await call_next(request)
```

Monitoring and Alerting: Implement memory usage monitoring in FastAPI:
```python
import logging
import psutil
from fastapi import Request

logger = logging.getLogger("memory-monitor")

@app.middleware("http")
async def memory_monitor_middleware(request: Request, call_next):
    process = psutil.Process()
    initial_mem = process.memory_info().rss
    response = await call_next(request)
    final_mem = process.memory_info().rss
    if final_mem - initial_mem > 50 * 1024 * 1024:  # 50MB increase
        logger.warning("Potential memory leak: %d bytes", final_mem - initial_mem)
    return response