Severity: HIGH

Double Free in Django with Bearer Tokens

How this specific combination creates or exposes the vulnerability

Double Free is a memory management vulnerability typically associated with languages that expose manual memory operations, such as C or C++. In the context of Django and Bearer Tokens, the term is used metaphorically to describe a logic condition where a token is accepted or validated more than once for the same authorization decision, effectively treating the same proof of possession as valid for multiple access grants or scopes. This can occur when token validation is performed in multiple, non-coordinated layers or when cached validation results are reused without rechecking revocation or usage context.

When Bearer Tokens are handled naively in Django—such as via custom middleware or utility functions that call a validation routine and then later re-validate the same token for routing or scope checks without a strict one-time-use ledger—the same token may satisfy multiple authorization checks. For example, a developer might decode a JWT in an authentication middleware, attach the payload to the request, and later perform an additional permission check that again decodes or introspects the token without ensuring the token has already been consumed for that request. This does not corrupt memory, but it can lead to privilege escalation or unauthorized access where a token intended for a single service boundary is accepted at multiple boundaries within the same application.
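The naive pattern above can be reduced to a minimal sketch (all names here are hypothetical, not from any real project): two uncoordinated layers each validate the same raw token, and a simple counter makes the redundancy visible.

```python
# Minimal sketch of the vulnerable pattern; decode_token stands in for jwt.decode().
validation_count = {"calls": 0}

def decode_token(token):
    # Counts how often the raw token is independently validated.
    validation_count["calls"] += 1
    return {"sub": "user1", "scopes": ["read:resource"]}

def auth_middleware(token):
    # First validation: the authentication layer.
    return decode_token(token)

def scope_check(token, required):
    # Second validation: the permission layer re-decodes the raw token
    # instead of reusing the state already attached to the request.
    return required in decode_token(token)["scopes"]

auth_middleware("opaque-token")
scope_check("opaque-token", "read:resource")
print(validation_count["calls"])  # 2: the same token satisfied two independent checks
```

A count above one per request is exactly the condition the remediation section below eliminates by attaching validated state to the request.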

Django’s typical Bearer Token flow involves the Authorization: Bearer <token> header. If token validation is implemented using libraries such as PyJWT or an OAuth2 introspection endpoint, and the developer does not enforce a single validation point per request, the token may pass independent checks that each appear correct but together violate the intended one-time-use semantics. Consider a scenario where an API endpoint first authenticates the token via a JWT library and later performs an additional scope check by re-parsing the token or calling an introspection endpoint without storing that the token has already been validated. The token is not freed or invalidated after the first use, so it remains logically available for subsequent checks, effectively creating a “double validation” condition.

This pattern is more likely when developers use caching to store decoded token payloads and then reuse cached data across multiple authorization decisions without verifying revocation status or a per-request usage flag. For instance, if a token payload is cached globally and multiple permission functions read from that cache, no explicit mechanism enforces that the token’s authorization is applied only once per request. An attacker who obtains a Bearer Token might exploit this by chaining multiple API calls that each trigger validation, potentially gaining access to broader scopes or resources that should have been gated behind distinct authorization steps.

In practice, this vulnerability does not stem from a memory bug but from an authorization logic flaw that mirrors the conceptual outcome of a Double Free: the same token is allowed to satisfy multiple independent authorization checks, weakening the boundary between authentication, scope enforcement, and resource access. Mitigation requires ensuring that token validation is centralized, that validated state is tracked per request, and that subsequent authorization steps reference the already-validated context rather than re-evaluating the raw token.

Bearer Token-Specific Remediation in Django: concrete code fixes

To remediate the double validation pattern with Bearer Tokens in Django, centralize token validation and ensure that once a token is accepted, subsequent authorization steps use the validated context rather than re-parsing or re-introspecting the token. Below are concrete code examples demonstrating a robust approach using JWTs and a request-scoped validation state.

First, define a middleware that validates the Bearer Token once and attaches the payload to the request. Use a flag to indicate that validation has occurred, preventing redundant validation in downstream checks.

import jwt
from django.conf import settings
from django.http import JsonResponse
from django.utils.deprecation import MiddlewareMixin

class BearerTokenMiddleware(MiddlewareMixin):
    def process_request(self, request):
        auth = request.META.get('HTTP_AUTHORIZATION', '')
        if not auth.startswith('Bearer '):
            return JsonResponse({'error': 'missing_token'}, status=401)
        token = auth.split(' ', 1)[1]
        # Validate exactly once and store the payload on the request.
        # PyJWT requires the signing key and an explicit algorithms list;
        # HS256 with SECRET_KEY is assumed here, so adjust for your key setup.
        try:
            payload = jwt.decode(token, settings.SECRET_KEY, algorithms=['HS256'])
        except jwt.ExpiredSignatureError:
            return JsonResponse({'error': 'token_expired'}, status=401)
        except jwt.InvalidTokenError:
            return JsonResponse({'error': 'invalid_token'}, status=401)
        request.token_payload = payload
        request.token_validated = True

Next, in views or permission classes, rely on request.token_payload and request.token_validated instead of decoding the token again. For scope checks, read from the already-decoded payload.

from django.http import JsonResponse

def protected_view(request):
    # Rely on the state set by BearerTokenMiddleware; never re-decode the token.
    if not getattr(request, 'token_validated', False):
        return JsonResponse({'error': 'unauthorized'}, status=401)
    scopes = request.token_payload.get('scopes', [])
    if 'read:resource' not in scopes:
        return JsonResponse({'error': 'insufficient_scope'}, status=403)
    return JsonResponse({'data': 'secure data'})
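The per-view guard can also be factored into a reusable decorator that reads only the request-scoped state. A standalone sketch, with hypothetical names and status/body tuples standing in for JsonResponse so it runs outside a configured Django project:

```python
from functools import wraps

# Hypothetical decorator enforcing a scope from the already-validated payload;
# (status, body) tuples stand in for JsonResponse to keep the sketch self-contained.
def require_scope(scope):
    def decorator(view):
        @wraps(view)
        def wrapper(request, *args, **kwargs):
            if not getattr(request, "token_validated", False):
                return (401, {"error": "unauthorized"})
            if scope not in getattr(request, "token_payload", {}).get("scopes", []):
                return (403, {"error": "insufficient_scope"})
            # The raw token is never touched here; only request-scoped state is read.
            return view(request, *args, **kwargs)
        return wrapper
    return decorator

class FakeRequest:
    pass

@require_scope("read:resource")
def resource_view(request):
    return (200, {"data": "secure data"})

req = FakeRequest()
req.token_validated = True
req.token_payload = {"scopes": ["read:resource"]}
print(resource_view(req)[0])            # 200
print(resource_view(FakeRequest())[0])  # 401
```

Because the decorator never accepts the raw token, it is impossible for it to re-validate one, which enforces the single-validation-point design at the type level.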

If you use a third-party introspection endpoint, store the introspection result once per request and reuse it. Avoid calling the introspection endpoint multiple times within the same request lifecycle.

import requests
from django.conf import settings
from django.http import JsonResponse

def introspect_once(token, request):
    # Cache the introspection result on the request so the endpoint is
    # called at most once per request lifecycle.
    if hasattr(request, '_introspection'):
        return request._introspection
    resp = requests.post(
        settings.INTROSPECTION_URL,
        data={'token': token},
        auth=(settings.INTROSPECTION_CLIENT_ID, settings.INTROSPECTION_CLIENT_SECRET),
        timeout=5,
    )
    request._introspection = resp.json()
    return request._introspection

def validate_bearer(request):
    auth = request.META.get('HTTP_AUTHORIZATION', '')
    if not auth.startswith('Bearer '):
        return JsonResponse({'error': 'missing_token'}, status=401)
    token = auth.split(' ', 1)[1]
    introspection = introspect_once(token, request)
    if not introspection.get('active'):
        return JsonResponse({'error': 'token_inactive'}, status=401)
    request.token_payload = introspection
    request.token_validated = True

By ensuring a single validation point and attaching state to the request, you eliminate the conditions that could lead to a double validation pattern. This approach aligns with secure handling of Bearer Tokens in Django and reduces the risk of authorization logic flaws that mimic the effects of a Double Free.

Frequently Asked Questions

How can I test if my Django app is vulnerable to token double validation?
Send the same Bearer Token in multiple API calls within a single session and check whether each call independently validates the token or whether validation state is shared. Review middleware and permission code to ensure token decoding occurs only once per request and that cached payloads are not reused across distinct authorization boundaries.
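One lightweight way to perform that review in an automated test is to spy on the validation routine and count invocations per request. A sketch using unittest.mock, where the module layout and names are hypothetical (in a real project you would patch your actual import path, such as the module attribute through which jwt.decode is called):

```python
import types
from unittest import mock

# Hypothetical app module with a validate() entry point.
authmod = types.SimpleNamespace(validate=lambda token: {"sub": "user1"})

def handle_request(token):
    authmod.validate(token)  # middleware validation
    authmod.validate(token)  # redundant permission-layer validation

# Spy on the validator without changing its behavior, then drive one request.
with mock.patch.object(authmod, "validate", wraps=authmod.validate) as spy:
    handle_request("tok")
print(spy.call_count)  # 2: any count above one per request flags the pattern
```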
Does middleBrick analyze Bearer Token validation logic for double validation patterns?
middleBrick scans unauthenticated attack surfaces and maps findings to frameworks such as OWASP API Top 10, but it does not inspect internal application logic such as token validation state. Use the CLI or Dashboard to retrieve scan reports and apply remediation guidance to harden your authorization flow.