
Cross-Site Scripting (XSS) in FastAPI

How Cross-Site Scripting (XSS) Manifests in FastAPI

Cross-Site Scripting (XSS) in FastAPI applications typically occurs when user input is rendered in HTML responses without proper sanitization. FastAPI's asynchronous nature and automatic JSON serialization can create unique XSS scenarios that developers might overlook.

One common pattern involves FastAPI's dependency injection system. When using Depends() to inject user data into route handlers, developers might inadvertently pass unsanitized input to templates:

from fastapi import FastAPI, Depends, Request
from fastapi.templating import Jinja2Templates
from pydantic import BaseModel

templates = Jinja2Templates(directory="templates")
app = FastAPI()

class UserData(BaseModel):
    username: str

def get_user_data():
    # In a real app, this would come from a database
    return UserData(username="<script>alert('xss')</script>")

@app.get("/profile")
async def profile_page(request: Request, user: UserData = Depends(get_user_data)):
    # TemplateResponse requires the request object in the context
    return templates.TemplateResponse("profile.html", {"request": request, "user": user})

The vulnerability here is that neither FastAPI nor Pydantic sanitizes HTML content in model fields. If the template renders the username with autoescaping disabled, or marks the value with the |safe filter, the script in the username field executes in the victim's browser.

Another FastAPI-specific scenario involves automatic JSON response serialization. When returning Pydantic models that contain HTML content, FastAPI serializes them to JSON as-is; JSON string escaping is not HTML escaping, so script tags pass through intact:

class Comment(BaseModel):
    content: str

@app.post("/comments")
async def create_comment(comment: Comment):
    # No sanitization of HTML content
    return comment

If a client requests this endpoint and renders the response content without proper escaping, XSS can occur. This is particularly dangerous in single-page applications that consume FastAPI APIs.
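To make the risk concrete, a short stdlib-only sketch shows that JSON serialization preserves script tags byte-for-byte, so a client that injects the string into the DOM as HTML will execute it, while html.escape neutralizes the markup:

```python
import html
import json

# FastAPI serializes response models to JSON much like json.dumps does
comment = {"content": "<script>alert('xss')</script>"}
body = json.dumps(comment)

# The script tag survives serialization untouched...
assert "<script>" in body

# ...so escaping must happen before the string is ever treated as HTML
escaped = html.escape(comment["content"])
assert escaped == "&lt;script&gt;alert(&#x27;xss&#x27;)&lt;/script&gt;"
```

This is why the safe rendering choice usually belongs to the consumer (textContent instead of innerHTML), but escaping on the server adds a second layer.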

FastAPI's WebSocket support introduces additional XSS vectors. Malicious users can inject scripts through WebSocket messages that are then broadcast to other users:

from fastapi import WebSocket

@app.websocket("/ws")
async def websocket_endpoint(websocket: WebSocket):
    await websocket.accept()
    while True:
        data = await websocket.receive_text()
        # Unsanitized data is echoed back verbatim; in a chat-style app the
        # same string would be broadcast to every connected client
        await websocket.send_text(f"User said: {data}")

Middleware that reflects request data into responses can also introduce injection issues. Here, an attacker-controlled request header is copied verbatim into a response header:

from fastapi import FastAPI, Request

app = FastAPI()

@app.middleware("http")
async def add_custom_header(request: Request, call_next):
    response = await call_next(request)
    # Unsanitized user data from request headers
    response.headers["X-Custom-Header"] = request.headers.get("X-User-Data", "")
    return response

FastAPI's background tasks feature can create delayed XSS vulnerabilities where user input is processed asynchronously and later rendered without proper sanitization.
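A minimal stdlib sketch of that delayed pattern (the list stands in for a database and ingest for a background task; both names are illustrative):

```python
import html

stored_comments: list[str] = []  # stands in for a database table

def ingest(comment: str) -> None:
    # A background task that stores input verbatim: the XSS is "delayed"
    # because nothing bad happens until the value is rendered
    stored_comments.append(comment)

def render_comments() -> str:
    # Render time is the last safe point to escape; without html.escape
    # the stored payload would execute in the viewer's browser
    return "".join(f"<p>{html.escape(c)}</p>" for c in stored_comments)

ingest("<script>alert('xss')</script>")
page = render_comments()
assert "<script>" not in page
assert "&lt;script&gt;" in page
```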

FastAPI-Specific Detection

Detecting XSS in FastAPI applications requires examining both the code structure and runtime behavior. middleBrick's black-box scanning approach is particularly effective for FastAPI APIs because it tests the actual attack surface without requiring source code access.

When scanning a FastAPI endpoint, middleBrick automatically tests for XSS by injecting payloads into all string parameters and examining responses. For FastAPI's JSON endpoints, it looks for reflected XSS where user input appears in API responses:

{
  "username": "<script>alert('xss')</script>",
  "message": "test"
}

The scanner checks if the response contains the unescaped script tags, which would indicate a vulnerability.
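A simplified, hypothetical version of such a check distinguishes a raw reflection from a safely escaped one:

```python
import html

PAYLOAD = "<script>alert('xss')</script>"

def reflection_is_dangerous(response_body: str, payload: str = PAYLOAD) -> bool:
    """Heuristic: the raw payload in the body signals XSS; the escaped form does not."""
    return payload in response_body

# A vulnerable endpoint echoes the payload verbatim
assert reflection_is_dangerous(f"You searched for: {PAYLOAD}")

# A safe endpoint reflects only the HTML-escaped form
assert not reflection_is_dangerous(f"You searched for: {html.escape(PAYLOAD)}")
```

Real scanners use many payload variants and context-aware checks (attribute, script, and URL contexts); this sketch only captures the core idea.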

For FastAPI applications using Jinja2 templates, middleBrick examines the template rendering logic by testing parameter binding. It looks for patterns where user input flows directly into template contexts without sanitization.

The OpenAPI spec analysis feature is particularly valuable for FastAPI apps. Since FastAPI auto-generates OpenAPI specs from route definitions, middleBrick can correlate the documented parameters with the actual runtime behavior:

from fastapi import FastAPI, Query

app = FastAPI()

@app.get("/search")
async def search(q: str = Query(...)):
    # Reflects the raw query string; dangerous if a client renders it as HTML
    return {"results": f"You searched for: {q}"}

middleBrick would detect that the q parameter is reflected in the response without escaping, even though the code appears benign.

For FastAPI WebSocket endpoints, the scanner tests message handling by establishing WebSocket connections and sending malicious payloads to check if they're reflected back to other clients or stored for later rendering.

middleBrick's LLM/AI security checks are especially relevant for FastAPI applications using AI features. If your FastAPI app has endpoints that process or generate LLM responses, the scanner tests for prompt injection and output sanitization issues that could lead to XSS in AI-generated content.

The continuous monitoring feature in the Pro plan is ideal for FastAPI applications in production. You can set up scheduled scans that automatically test your API endpoints for new XSS vulnerabilities as your codebase evolves.

FastAPI-Specific Remediation

FastAPI provides several native approaches to prevent XSS vulnerabilities. The most effective strategy combines input validation, output encoding, and secure template rendering.

For Pydantic models that accept user input, constrain allowed characters with constr and a custom validator (the example uses Pydantic v1 syntax; Pydantic v2 renames regex to pattern and validator to field_validator):

from pydantic import BaseModel, constr, validator
from typing import Any

class SafeUserInput(BaseModel):
    username: constr(regex=r'^[a-zA-Z0-9_]{3,20}$')
    comment: constr(max_length=500)

    @validator('comment')
    def no_html(cls, v: Any) -> str:
        if '<' in v or '>' in v:
            raise ValueError('HTML tags are not allowed')
        return v
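The same rules can be expressed without Pydantic; a plain-Python sketch of the two constraints (regex and function names are illustrative):

```python
import re

USERNAME_RE = re.compile(r"^[a-zA-Z0-9_]{3,20}$")

def validate_input(username: str, comment: str) -> None:
    # Mirrors the constr(regex=...) constraint on username
    if not USERNAME_RE.fullmatch(username):
        raise ValueError("username must be 3-20 word characters")
    # Mirrors the max_length and no-HTML checks on comment
    if len(comment) > 500 or "<" in comment or ">" in comment:
        raise ValueError("HTML tags are not allowed")

validate_input("alice_01", "great post")  # passes silently

try:
    validate_input("alice_01", "<script>alert('xss')</script>")
except ValueError as exc:
    assert "not allowed" in str(exc)
```

Allowlisting characters at the boundary is stricter than blocklisting tags, and it composes well with output escaping rather than replacing it.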

For template rendering with Jinja2 in FastAPI, enable autoescaping and use the built-in filters:

from fastapi import FastAPI, Request
from fastapi.templating import Jinja2Templates

app = FastAPI()

templates = Jinja2Templates(
    directory="templates",
    autoescape=True  # explicit here; this is also Starlette's default
)

@app.get("/user/{user_id}")
async def user_profile(request: Request, user_id: str):
    user_data = await get_user_from_db(user_id)  # placeholder for your data-access layer
    return templates.TemplateResponse("profile.html", {
        "request": request,
        "user": user_data
    })

In your Jinja2 template, apply the |e filter to user-generated content as defense in depth (redundant but harmless when autoescaping is enabled):

<p>Welcome, {{ user.username|e }}</p>
<div class="bio">{{ user.bio|e }}</div>

For JSON endpoints, implement output sanitization using Python's html module:

import html
from fastapi import FastAPI

app = FastAPI()

@app.post("/comments")
async def create_comment(comment: str):
    sanitized = html.escape(comment)
    return {"comment": sanitized}
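For reference, html.escape (with its default quote=True) rewrites the five HTML metacharacters, which is enough to stop a string from being parsed as markup:

```python
import html

# Angle brackets and single quotes are all rewritten
assert html.escape("<script>alert('xss')</script>") == (
    "&lt;script&gt;alert(&#x27;xss&#x27;)&lt;/script&gt;"
)

# Quotes matter too: attribute-context injection relies on " and '
assert html.escape('" onmouseover="alert(1)') == "&quot; onmouseover=&quot;alert(1)"
```

Note that escaping stored data changes what clients receive; many teams prefer storing raw input and escaping at render time instead, so the same data can feed both HTML and non-HTML consumers.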

For FastAPI applications that serve both API and HTML content, use middleware to enforce content security policies:

from fastapi import FastAPI, Request
from fastapi.middleware.cors import CORSMiddleware

app = FastAPI()

@app.middleware("http")
async def csp_middleware(request: Request, call_next):
    response = await call_next(request)
    response.headers["Content-Security-Policy"] = "default-src 'self'; script-src 'self'"
    return response
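A small, hypothetical helper can sanity-check the policy string before deploying it (directive names follow the CSP specification):

```python
def parse_csp(header: str) -> dict[str, list[str]]:
    """Split a Content-Security-Policy header into directive -> source list."""
    directives: dict[str, list[str]] = {}
    for part in header.split(";"):
        tokens = part.split()
        if tokens:
            directives[tokens[0]] = tokens[1:]
    return directives

policy = parse_csp("default-src 'self'; script-src 'self'")
assert policy == {"default-src": ["'self'"], "script-src": ["'self'"]}

# 'unsafe-inline' in script-src would largely defeat the XSS protection
assert "'unsafe-inline'" not in policy["script-src"]
```

CSP is a mitigation, not a fix: it limits what an injected script can do, but the escaping and validation above are still required.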

When using FastAPI's WebSocket support, implement message sanitization before broadcasting:

import html
from fastapi import WebSocket
from typing import List

class ConnectionManager:
    def __init__(self):
        self.active_connections: List[WebSocket] = []
    
    async def connect(self, websocket: WebSocket):
        await websocket.accept()
        self.active_connections.append(websocket)
    
    def disconnect(self, websocket: WebSocket):
        self.active_connections.remove(websocket)
    
    async def broadcast(self, message: str):
        # Escape the message once, then send it to every connected client
        sanitized = html.escape(message)
        for connection in self.active_connections:
            await connection.send_text(sanitized)

For background tasks that process user input, always sanitize data before storing or rendering:

from fastapi import BackgroundTasks
import html

async def process_comment_background(comment: str):
    sanitized = html.escape(comment)
    await save_to_database(sanitized)  # placeholder for your persistence layer

@app.post("/comments")
async def create_comment(
    comment: str,
    background_tasks: BackgroundTasks
):
    background_tasks.add_task(process_comment_background, comment)
    return {"status": "processing"}

Related CWEs (input validation and injection)

CWE ID    Name                           Severity
CWE-20    Improper Input Validation      HIGH
CWE-22    Path Traversal                 HIGH
CWE-74    Injection                      CRITICAL
CWE-77    Command Injection              CRITICAL
CWE-78    OS Command Injection           CRITICAL
CWE-79    Cross-site Scripting (XSS)     HIGH
CWE-89    SQL Injection                  CRITICAL
CWE-90    LDAP Injection                 HIGH
CWE-91    XML Injection                  HIGH
CWE-94    Code Injection                 CRITICAL

Frequently Asked Questions

How does middleBrick detect XSS vulnerabilities in FastAPI applications?
middleBrick performs black-box scanning by sending malicious payloads to your FastAPI endpoints and analyzing responses. It tests all string parameters with XSS payloads, examines JSON responses for reflected content, and analyzes template rendering patterns. The scanner also examines your OpenAPI spec to understand parameter flow and correlates it with runtime behavior. For FastAPI apps using AI features, it includes specialized LLM security checks for prompt injection and output sanitization.
Can middleBrick scan my FastAPI application if it's behind authentication?
Yes, middleBrick can scan authenticated FastAPI endpoints. You can provide authentication credentials (API keys, OAuth tokens, or basic auth) when submitting your API for scanning. The scanner will use these credentials to access protected routes and test the full authenticated attack surface. This is particularly important for FastAPI applications where XSS vulnerabilities might only appear after authentication.