Prompt Injection in Django with Mutual Tls
Prompt Injection in Django with Mutual TLS — how this specific combination creates or exposes the vulnerability
Prompt injection targets applications that integrate with LLMs and rely on user-influenced inputs to shape model behavior. In a Django application that calls an LLM endpoint, prompt injection can occur when untrusted data such as query parameters, headers, or request bodies is incorporated into the prompt sent to the model. When mutual TLS is used for client authentication, the client certificate becomes part of the request context. If the application uses attributes from the client certificate (for example, the subject or serial number) in constructing prompts or routing logic, an attacker who compromises or spoofes a certificate can inject crafted content into the prompt surface.
Mutual TLS ensures the client is known to the server, but it does not prevent the server from mishandling that identity information. For example, using the certificate subject in a role-based prompt such as {{ system_prompt }} User: {{ cert_subject }} can allow an attacker with a valid certificate to change the assumed role or context, effectively bypassing intended guardrails. In black-box scanning, middleBrick’s LLM/AI Security checks detect whether identity-derived inputs influence LLM behavior by running sequential probes including system prompt extraction and instruction override. These probes can surface prompt injection risks even when TLS is in place, because the vulnerability lies in how identity data is used rather than in the transport security itself.
Additionally, an unauthenticated LLM endpoint exposed in Django (for example, a debug or health route that forwards user input to an LLM without proper access control) can be targeted directly. Attackers can send crafted requests where mutual TLS is not required, attempting role overrides, data exfiltration, or cost exploitation. middleBrick’s LLM/AI Security module checks for such unauthenticated endpoints and validates that sensitive system instructions remain protected against unauthorized influence. Therefore, even with mutual TLS deployed for backend services, developers must ensure that any data derived from the TLS client context is treated as untrusted when building LLM prompts, and that endpoints not requiring client certificates are not allowed to influence model behavior.
Mutual TLS-Specific Remediation in Django — concrete code fixes
To mitigate prompt injection risks in Django when using mutual TLS, treat certificate-derived data as untrusted input and avoid incorporating it directly into LLM prompts. Use the certificate for authentication and access control only, and keep LLM prompt construction separate from identity-derived logic. The following examples demonstrate secure patterns.
1. Configure mutual TLS in Django via middleware and secure request handling
Use Django middleware to extract and validate the client certificate without passing raw identity fields into prompts. Store the certificate fingerprint or a mapped user ID in request attributes for safe downstream use.
import ssl
import hashlib
from django.http import HttpResponse
from django.utils.deprecation import MiddlewareMixin
class MutualTlsMiddleware(MiddlewareMixin):
def process_request(self, request):
cert = request.META.get('SSL_CLIENT_CERT')
if cert:
# Use a stable, non-sensitive identifier derived from the cert
fingerprint = hashlib.sha256(cert.encode('utf-8')).hexdigest()
request.cert_fingerprint = fingerprint
# Map fingerprint to application user/role via a trusted lookup
request.user_role = self.map_fingerprint_to_role(fingerprint)
else:
request.user_role = 'anonymous'
return None
def map_fingerprint_to_role(self, fingerprint):
# Implement your trusted mapping, e.g., from a secure store
mapping = {
'a1b2c3...': 'analyst',
'd4e5f6...': 'admin',
}
return mapping.get(fingerprint, 'guest')
2. Call LLMs without including certificate identity in prompts
When invoking an LLM, rely only on explicitly validated inputs and avoid injecting request context that could be manipulated. Use parameterized prompts and strict input validation.
import openai
from django.conf import settings
def call_llm(user_message, user_role):
# Build a safe system prompt based on role, without injecting raw certificate data
role_prompts = {
'admin': 'You are an administrator with full access to tools.',
'analyst': 'You are an analyst with read-only capabilities.',
'guest': 'You are a guest with limited capabilities.',
}
system_prompt = role_prompts.get(user_role, 'You are a default assistant.')
# Use a vetted client configuration; do not concatenate user or cert data into the prompt
response = openai.ChatCompletion.create(
model='gpt-4o-mini',
messages=[
{'role': 'system', 'content': system_prompt},
{'role': 'user', 'content': user_message},
],
max_tokens=100,
temperature=0.2,
)
return response.choices[0].message.content
3. Protect unauthenticated LLM endpoints and apply consistent checks
Ensure endpoints that call LLMs either require mutual TLS or other strong authentication. For public endpoints, avoid using LLM calls or enforce strict rate limiting and input validation. Use tools like middleBrick to validate that system instructions remain protected and that no unauthenticated path can influence model behavior.
# Example safe view that requires authentication before LLM invocation
from django.contrib.auth.decorators import login_required
from django.http import JsonResponse
@login_required
def llm_endpoint(request):
user_message = request.POST.get('message', '')
if not user_message:
return JsonResponse({'error': 'message required'}, status=400)
# Role is derived from authenticated user, not from raw cert fields
result = call_llm(user_message, user_role=request.user.role)
return JsonResponse({'response': result})
4. Validation and mapping best practices
- Never use certificate subject fields, serial numbers, or raw DN strings directly in prompts.
- Map certificate attributes to internal identifiers offline and use those identifiers for access decisions only.
- Log certificate fingerprints for audit purposes, but ensure logs do not leak sensitive identity details used in prompts.
Related CWEs: llmSecurity
| CWE ID | Name | Severity |
|---|---|---|
| CWE-754 | Improper Check for Unusual or Exceptional Conditions | MEDIUM |