Memory Leak in Actix with API Keys
Memory Leak in Actix with API Keys — how this specific combination creates or exposes the vulnerability
A memory leak in an Actix web service that handles API keys typically arises when key material is stored in long-lived or shared state without cleanup. In Actix, application state is usually shared through web::Data, which is reference-counted and visible to every request handler. If API keys are cached, validated, or rate-limited in that state (for example, in a HashMap or a custom struct) and entries are never evicted, the structure grows without bound: each unique key adds an entry that outlives its request, and the process's RSS climbs steadily under sustained load.
Consider a handler that stores per-client key metadata in application state for quick lookup:
use actix_web::{web, App, HttpResponse, HttpServer, Responder};
use std::collections::HashMap;
use std::sync::Mutex;

struct AppState {
    key_metadata: Mutex<HashMap<String, KeyInfo>>,
}

struct KeyInfo {
    last_seen: std::time::Instant,
    // other fields…
}

async fn validate_key(
    key: String,
    data: web::Data<AppState>,
) -> impl Responder {
    let mut map = data.key_metadata.lock().unwrap();
    // Every previously unseen key inserts a new entry; nothing ever removes one.
    map.entry(key)
        .and_modify(|info| info.last_seen = std::time::Instant::now())
        .or_insert(KeyInfo {
            last_seen: std::time::Instant::now(),
        });
    HttpResponse::Ok().finish()
}
If the key String is used as the map key and never removed, the map grows indefinitely. Even if you store only metadata, the key strings themselves occupy memory. In a long-running process this can manifest as a steady increase in memory usage, which may be interpreted by middleBrick as a risk factor for resource exhaustion under sustained traffic.
Another common pattern is per-request allocations that are captured by spawned tasks, delaying deallocation. A task handed to spawn must own everything it uses, so moving the key or a large request payload into it keeps that memory alive until the task finishes; if tasks queue up or are delayed, the retained memory accumulates:
async fn handler(key: web::Json<String>, data: web::Data<AppState>) -> impl Responder {
    let key = key.into_inner();
    actix_web::rt::spawn(async move {
        // Simulated async work: `key` is moved into the task and stays
        // allocated until the task completes. If the task queue backs up,
        // memory grows.
        actix_web::rt::time::sleep(std::time::Duration::from_secs(5)).await;
        drop(key);
    });
    HttpResponse::Accepted().finish()
}
middleBrick’s scans include checks aligned with Input Validation and Unsafe Consumption. It does not infer root cause but highlights the presence of large or long-lived allocations and hints at patterns that can contribute to memory retention. The scanner also checks for SSRF and other behaviors that may indirectly trigger or amplify resource usage when processing untrusted input such as API keys from external sources.
Remediation focuses on reducing retention: avoid storing keys in long-lived mutable state when possible, prefer short-lived in-request caches, and ensure data held across awaits does not unintentionally extend lifetimes. middleBrick’s findings include prioritized guidance and mapping to frameworks such as OWASP API Top 10 to help contextualize the risk.
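As a minimal sketch of that last point, reusing the AppState type from the first example (the handler name and the contains_key check are illustrative, not from the original), scope any lock guard so it is released before the handler awaits:
async fn check_key(key: String, data: web::Data<AppState>) -> impl Responder {
    // Scope the guard so it is dropped before any await point;
    // nothing borrowed from the map outlives this block.
    let is_known = {
        let map = data.key_metadata.lock().unwrap();
        map.contains_key(&key)
    };
    // Any awaits from here on hold neither the lock nor map references.
    if is_known {
        HttpResponse::Ok().finish()
    } else {
        HttpResponse::Unauthorized().finish()
    }
}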
API Key-Specific Remediation in Actix — concrete code fixes
To mitigate memory leaks while handling API keys in Actix, minimize shared mutable state and ensure timely cleanup. One approach is to use a bounded cache with a TTL so entries are evicted automatically. The moka crate provides a concurrent, thread-safe cache with size- and time-based eviction (reads return clones, so cached values must implement Clone):
use actix_web::{web, App, HttpResponse, HttpServer, Responder};
use moka::sync::Cache;
use std::time::{Duration, Instant};

// moka hands out clones of stored values, so the value type must be Clone.
#[derive(Clone)]
struct KeyInfo {
    last_seen: Instant,
    // other lightweight metadata
}

fn build_cache() -> Cache<String, KeyInfo> {
    Cache::builder()
        // Entries are evicted automatically five minutes after insertion.
        .time_to_live(Duration::from_secs(300))
        // Hard upper bound on the number of cached keys.
        .max_capacity(10_000)
        .build()
}

async fn validate_key(
    key: String,
    cache: web::Data<Cache<String, KeyInfo>>,
) -> impl Responder {
    cache.insert(key, KeyInfo { last_seen: Instant::now() });
    HttpResponse::Ok().finish()
}
This pattern avoids unbounded growth: entries older than the TTL are dropped, and max_capacity provides a hard bound. moka's Cache is internally reference-counted, so it is cheap to clone and can be shared directly via web::Data (which itself wraps the value in an Arc); key material is not retained beyond the cache entry's lifetime.
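A sketch of the startup wiring under those assumptions (the /validate route and bind address are illustrative): build the cache once and hand the same instance to every worker:
#[actix_web::main]
async fn main() -> std::io::Result<()> {
    let cache = build_cache();
    HttpServer::new(move || {
        App::new()
            // Each worker factory call gets a cheap clone of the same cache.
            .app_data(web::Data::new(cache.clone()))
            .route("/validate", web::post().to(validate_key))
    })
    .bind(("127.0.0.1", 8080))?
    .run()
    .await
}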
If you must keep a Mutex<HashMap<…>>, evict stale entries on a schedule and avoid holding the guard or other references across awaits:
use actix_web::{web, App, HttpResponse, HttpServer, Responder};
use std::collections::HashMap;
use std::sync::{Arc, Mutex};
use std::time::{Duration, Instant};

struct KeyEntry {
    last_seen: Instant,
}

struct KeyState {
    map: Mutex<HashMap<String, KeyEntry>>,
}

// Background task: periodically drop entries that have not been seen
// within the five-minute TTL.
async fn cleanup_loop(state: Arc<KeyState>) {
    loop {
        actix_web::rt::time::sleep(Duration::from_secs(60)).await;
        let mut map = state.map.lock().unwrap();
        let cutoff = Instant::now() - Duration::from_secs(300);
        map.retain(|_key, entry| entry.last_seen > cutoff);
        // The guard drops at the end of each iteration, so the lock is
        // never held across the sleep.
    }
}

async fn validate_key(
    key: String,
    state: web::Data<KeyState>,
) -> impl Responder {
    let mut map = state.map.lock().unwrap();
    map.insert(key, KeyEntry { last_seen: Instant::now() });
    HttpResponse::Ok().finish()
}
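The wiring for this variant (again a sketch; the route and address are illustrative): create one Arc<KeyState>, spawn the cleanup task with a clone, and pass the same Arc to Actix via web::Data::from so handlers extract web::Data<KeyState>:
#[actix_web::main]
async fn main() -> std::io::Result<()> {
    let state = Arc::new(KeyState {
        map: Mutex::new(HashMap::new()),
    });
    // One background cleanup task for the whole server.
    actix_web::rt::spawn(cleanup_loop(state.clone()));
    HttpServer::new(move || {
        App::new()
            .app_data(web::Data::from(state.clone()))
            .route("/validate", web::post().to(validate_key))
    })
    .bind(("127.0.0.1", 8080))?
    .run()
    .await
}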
For per-request allocations, avoid spawning tasks that capture the key unless necessary, and move only the data the task actually needs. If the key must be shared, wrap it in an Arc so clones are cheap and the original can be dropped promptly:
async fn handler(
    key: web::Json<String>,
    cache: web::Data<Cache<String, ()>>,
) -> impl Responder {
    let key_ref = Arc::new(key.into_inner());
    let cache_clone = cache.clone();
    let key_clone = Arc::clone(&key_ref);
    actix_web::rt::spawn(async move {
        // The task owns an Arc<String>, not a second full copy of the key.
        let _ = cache_clone.get(key_clone.as_str());
    });
    // key_ref drops here; the task's Arc keeps the allocation alive
    // only until the task finishes.
    HttpResponse::Accepted().finish()
}
middleBrick’s CLI can validate these patterns by scanning your endpoints: run middlebrick scan <url> from the terminal and review the findings. The GitHub Action adds API security checks to your CI/CD pipeline, failing builds if risk scores drop below your chosen threshold. The MCP Server lets you scan APIs directly from your AI coding assistant, helping catch problematic state usage early, and the Dashboard tracks your API security scores over time to confirm improvements after remediation.