
Memory Leak in Axum with JWT Tokens

Memory Leak in Axum with JWT Tokens — how this specific combination creates or exposes the vulnerability

A memory leak in an Axum service that uses JWT tokens typically occurs when token payloads or cryptographic verification contexts are retained beyond the request lifetime. In Rust, such leaks rarely come from forgotten frees; they come from ownership patterns that keep data alive: static caches, global structures, or long-lived state that accumulates entries across requests. When JWT decoding is performed per request and the results are stored in a long-lived cache without eviction, each request adds a little more to the heap. For example, storing decoded claims in a lazy_static or once_cell map keyed by token identifiers, with no cleanup, causes unbounded growth. Axum extractors that deserialize token payloads and retain them in application state or thread-local storage contribute in the same way if that state is not scoped to the request. This is especially relevant with HS256/RS256 tokens carrying large custom claims, because the deserialized struct may hold strings or collections that are never dropped. Over time, resident memory climbs, performance degrades, and sustained load can trigger out-of-memory conditions.

JWT-Specific Remediation in Axum — concrete code fixes

To mitigate memory leaks when handling JWT tokens in Axum, scope allocations to the request lifecycle and avoid storing decoded data in long-lived caches. Use extractor patterns that parse tokens per request and drop results when the request ends. Prefer references over owned data where possible, and ensure any caches implement size limits and time-based eviction.

Example 1: Safe per-request decoding without caching

use axum::{routing::get, Router, extract::Extension};
use jsonwebtoken::{decode, Algorithm, DecodingKey, Validation, TokenData};
use serde::{Deserialize, Serialize};
use std::net::SocketAddr;

#[derive(Debug, Serialize, Deserialize)]
struct Claims {
    sub: String,
    exp: usize,
    // avoid large or unbounded custom fields
    role: String,
}

async fn handler(
    // decode each request without storing the result globally
    Extension(token): Extension<String>,
) -> String {
    let decoding_key = DecodingKey::from_secret("secret".as_ref());
    let validation = Validation::new(Algorithm::HS256);
    let token_data: TokenData<Claims> = decode(&token, &decoding_key, &validation)
        .expect("valid token");
    // token_data is dropped when this handler returns, freeing the claims
    format!("user: {}, role: {}", token_data.claims.sub, token_data.claims.role)
}

#[tokio::main]
async fn main() {
    let app = Router::new()
        .route("/profile", get(handler))
        .layer(Extension("eyJ0eXAiOiJKV1QiLCJhbGciOiJIUzI1NiJ9.eyJzdWIiOiJ1c2VyMSIsImV4cCI6OTk5OTk5OTk5OSwicm9sZSI6InVzZXIifQ.signature".to_string()));
    let addr = SocketAddr::from(([127, 0, 0, 1], 3000));
    axum::Server::bind(&addr)
        .serve(app.into_make_service())
        .await
        .unwrap();
}

Example 2: Bounded caching with expiration

use axum::{routing::get, Extension, Router};
use jsonwebtoken::{decode, Algorithm, DecodingKey, TokenData, Validation};
use serde::Deserialize;
use std::collections::HashMap;
use std::net::SocketAddr;
use std::sync::{Arc, Mutex};
use std::time::{Duration, Instant};

#[derive(Debug, Clone, Deserialize)]
struct Claims {
    sub: String,
    exp: usize,
}

struct CacheEntry {
    data: Claims,
    expires_at: Instant,
}

struct ClaimsCache {
    map: HashMap<String, CacheEntry>,
    max_size: usize,
    ttl: Duration,
}

impl ClaimsCache {
    fn new(max_size: usize, ttl: Duration) -> Self {
        Self { map: HashMap::new(), max_size, ttl }
    }

    fn get(&mut self, key: &str) -> Option<Claims> {
        if let Some(entry) = self.map.get(key) {
            if entry.expires_at > Instant::now() {
                return Some(entry.data.clone());
            }
            // expired: remove the entry so it does not linger in the map
            self.map.remove(key);
        }
        None
    }

    fn insert(&mut self, key: String, claims: Claims) {
        if self.map.len() >= self.max_size {
            // simple eviction: remove oldest entries
            let oldest_key = self.map.iter()
                .min_by_key(|(_, e)| e.expires_at)
                .map(|(k, _)| k.clone())
                .unwrap_or_default();
            self.map.remove(&oldest_key);
        }
        self.map.insert(key, CacheEntry {
            data: claims,
            expires_at: Instant::now() + self.ttl,
        });
    }
}

async fn handler_with_cache(
    Extension(token): Extension<String>,
    Extension(cache): Extension<Arc<Mutex<ClaimsCache>>>,
) -> String {
    let mut cache = cache.lock().unwrap();
    if let Some(claims) = cache.get(&token) {
        return format!("cached: {}", claims.sub);
    }
    let decoding_key = DecodingKey::from_secret("secret".as_ref());
    let validation = Validation::new(Algorithm::HS256);
    let token_data: TokenData<Claims> = decode(&token, &decoding_key, &validation)
        .expect("valid token");
    cache.insert(token.clone(), token_data.claims.clone());
    format!("fresh: {}", token_data.claims.sub)
}

#[tokio::main]
async fn main() {
    let cache = Arc::new(Mutex::new(ClaimsCache::new(1000, Duration::from_secs(300))));
    let app = Router::new()
        .route("/cached", get(handler_with_cache))
        .layer(Extension(cache));
    let addr = SocketAddr::from(([127, 0, 0, 1], 3001));
    axum::Server::bind(&addr)
        .serve(app.into_make_service())
        .await
        .unwrap();
}

Frequently Asked Questions

How can I verify my Axum JWT handling does not retain data across requests?
Use per-request extractors for decoding and avoid storing decoded claims in static or application state. Profile memory usage under load and ensure no global HashMap or lazy_static accumulates token data without eviction.
Does middleBrick detect memory leak risks related to JWT handling during scans?
middleBrick scans unauthenticated attack surfaces and includes checks such as Unsafe Consumption that can surface insecure handling patterns. Findings include severity and remediation guidance and map to frameworks such as the OWASP API Top 10; scans typically take 5–15 seconds.