Memory Leak in Axum with API Keys
Memory Leak in Axum with API Keys — how this specific combination creates or exposes the vulnerability
A memory leak in an Axum service that uses API keys typically arises when key validation or key-to-permission mapping retains data longer than necessary. In a black-box scan, middleBrick’s Property Authorization and Input Validation checks can surface indicators such as unbounded growth of in-memory caches or repeated per-request allocations that correlate with retained key material. For example, storing resolved key metadata (scopes, owner, rate-limit counters) in a static RwLock<HashMap<_, _>> without eviction logic keeps request-scoped data alive indefinitely, increasing RSS over time. If authorization logic inserts per-request entries into such shared state on every call and never removes them, those entries accumulate across concurrent requests, a pattern middleBrick flags under BFLA/Privilege Escalation when key-related state is not bounded.
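The unbounded-cache anti-pattern described above can be sketched in a few lines of plain Rust; `KEY_CACHE` and `cache_scopes` are illustrative names, not part of any real codebase:

```rust
use std::collections::HashMap;
use std::sync::{Mutex, OnceLock};

// Anti-pattern: a process-global cache of per-key metadata with no eviction.
// Every distinct key the service ever sees stays resident for the life of the process.
static KEY_CACHE: OnceLock<Mutex<HashMap<String, Vec<String>>>> = OnceLock::new();

fn cache_scopes(key: &str, scopes: Vec<String>) -> usize {
    let cache = KEY_CACHE.get_or_init(|| Mutex::new(HashMap::new()));
    let mut map = cache.lock().unwrap();
    // Entries are only ever inserted, never removed, so the map grows
    // monotonically with the number of distinct keys observed.
    map.insert(key.to_string(), scopes);
    map.len()
}
```

Under sustained traffic with many distinct keys (or attacker-supplied invalid keys, if those are cached too), this map is the unbounded growth a scanner observes as steadily rising RSS.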
When API keys arrive in request headers and are processed on every route, handler code can inadvertently extend the lifetime of key buffers. A common anti-pattern is cloning a large key payload or parsed claims into a long-lived logging, audit, or metrics hook that feeds downstream middleware, preventing timely deallocation. middleBrick’s Data Exposure checks may reveal unusually large response payloads or repeated allocations tied to authentication paths, hinting at inefficient key handling. Where key validation performs regex or cryptographic work per request without pooling, CPU pressure and allocator churn can further stress the runtime and amplify leak symptoms under sustained traffic.
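As a minimal illustration of that lifetime extension, the sketch below (with hypothetical `AuditHook` and `register_hook` names) shows a logging hook cloning an `Arc` to raw key material; the buffer cannot be freed until the hook itself is dropped, long after the request ends:

```rust
use std::sync::Arc;

// Sketch of the anti-pattern: an observability hook that captures key
// material by Arc, extending its lifetime past the request.
struct AuditHook {
    // The boxed closure keeps whatever it captured alive for as long as
    // the hook stays registered.
    log: Box<dyn Fn() -> usize>,
}

fn register_hook(raw_key: Arc<Vec<u8>>) -> AuditHook {
    // Cloning the Arc into the closure pins the key buffer in memory even
    // though the originating request has already completed.
    let captured = Arc::clone(&raw_key);
    AuditHook {
        log: Box::new(move || captured.len()),
    }
}
```

Checking `Arc::strong_count` before and after dropping the hook makes the retention visible: the count only returns to one once the hook is gone.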
Moreover, if API key material is serialized and cached for introspection or audit, missing drop hooks or weak-reference patterns can keep deserialized structures resident. This is especially relevant when OpenAPI/Swagger spec analysis (2.0, 3.0, 3.1) shows that key-related models are reused across many endpoints, but runtime findings reveal that certain paths do not clean up associated metadata. middleBrick’s spec-based cross-referencing can highlight mismatches between documented key usage and observed allocations, supporting the Inventory Management check by surfacing endpoints where key-related state is not properly bounded or released.
API Key-Specific Remediation in Axum — concrete code fixes
To address memory concerns while keeping API key validation in Axum, prefer small, owned key representations and avoid retaining data beyond the request lifetime. Use extractor patterns that deserialize keys into owned, bounded structures that are dropped automatically when the request ends. The following example shows a robust approach with typed key extraction, scoped authorization, and bounded caching that minimizes long-lived allocations.
use axum::{
    async_trait,
    extract::FromRequestParts,
    http::{request::Parts, StatusCode},
    response::{IntoResponse, Response},
};
use std::{
    collections::HashMap,
    sync::{Arc, RwLock},
};

#[derive(Clone, Debug, Eq, PartialEq, Hash)]
struct ApiKey(String);

#[derive(Clone, Debug)]
struct KeyPermissions {
    scopes: Vec<String>,
    rate_limit: u32,
}

#[derive(Clone)]
struct KeyStore(Arc<RwLock<HashMap<ApiKey, KeyPermissions>>>);

// Bounded extractor that clones only the data it needs and holds no
// references into the request or the shared store.
struct AuthenticatedKey {
    key: ApiKey,
    perms: KeyPermissions,
}

// Only the headers are needed, so FromRequestParts (rather than FromRequest)
// is the right trait: it never touches or buffers the request body.
#[async_trait]
impl FromRequestParts<KeyStore> for AuthenticatedKey {
    type Rejection = Response;

    async fn from_request_parts(parts: &mut Parts, state: &KeyStore) -> Result<Self, Self::Rejection> {
        let key = parts
            .headers
            .get("X-API-Key")
            .and_then(|v| v.to_str().ok())
            .map(|s| ApiKey(s.to_string()))
            .ok_or_else(|| (StatusCode::BAD_REQUEST, "Missing key").into_response())?;
        // Clone only the permissions out of the shared store; the read guard
        // is released at the end of this statement.
        let perms = state
            .0
            .read()
            .unwrap()
            .get(&key)
            .cloned()
            .ok_or_else(|| (StatusCode::FORBIDDEN, "Invalid key").into_response())?;
        // Key and permissions are small, owned values; they are dropped when the request ends.
        Ok(AuthenticatedKey { key, perms })
    }
}

// Handler that uses the extractor and avoids capturing large context.
async fn handler(key: AuthenticatedKey) -> &'static str {
    // Authorize using the owned permissions; nothing here outlives the request.
    if key.perms.scopes.iter().any(|s| s == "read:data") {
        "OK"
    } else {
        "Forbidden"
    }
}

// Example of bounded rotation: rebuild the map and swap it in wholesale,
// so entries for revoked or stale keys are freed.
async fn rotate_store(store: &KeyStore, new_keys: HashMap<String, KeyPermissions>) {
    let rebuilt: HashMap<ApiKey, KeyPermissions> =
        new_keys.into_iter().map(|(k, v)| (ApiKey(k), v)).collect();
    // Swap under the write lock; the old map is dropped here, releasing its memory.
    *store.0.write().unwrap() = rebuilt;
}
In this pattern, the extractor owns its data and releases it when the request ends, reducing the risk of prolonged retention. For high-throughput services, consider using a concurrent LRU cache with size limits instead of an unbounded HashMap, and ensure that any background tasks or observability hooks do not accidentally hold references to key-related objects. middleBrick’s Rate Limiting and Unsafe Consumption checks can validate that per-request allocations remain bounded and that no unchecked deserialization retains oversized buffers.
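A size-bounded cache need not be elaborate: the sketch below uses plain FIFO eviction over std containers to keep memory constant regardless of how many distinct keys are seen. In production a concurrent LRU crate is usually a better fit; `BoundedKeyCache` and its methods are illustrative names, not an existing API.

```rust
use std::collections::{HashMap, VecDeque};

// Minimal size-bounded cache with FIFO eviction. Memory use is capped by
// `capacity` no matter how many distinct keys the service observes.
struct BoundedKeyCache {
    capacity: usize,
    order: VecDeque<String>,
    entries: HashMap<String, Vec<String>>,
}

impl BoundedKeyCache {
    fn new(capacity: usize) -> Self {
        Self {
            capacity,
            order: VecDeque::with_capacity(capacity),
            entries: HashMap::with_capacity(capacity),
        }
    }

    fn insert(&mut self, key: String, scopes: Vec<String>) {
        if !self.entries.contains_key(&key) {
            // Evict the oldest entry before the bound would be exceeded.
            if self.entries.len() == self.capacity {
                if let Some(oldest) = self.order.pop_front() {
                    self.entries.remove(&oldest);
                }
            }
            self.order.push_back(key.clone());
        }
        self.entries.insert(key, scopes);
    }

    fn get(&self, key: &str) -> Option<&Vec<String>> {
        self.entries.get(key)
    }

    fn len(&self) -> usize {
        self.entries.len()
    }
}
```

With a capacity of, say, two, inserting a third distinct key evicts the oldest entry rather than growing the map, which is exactly the bounded behavior a scanner should observe on authentication paths.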
Additionally, verify that your OpenAPI/Swagger definitions accurately reflect key scopes and that runtime findings align with documented behavior. middleBrick’s spec-aware analysis helps identify endpoints where key metadata may be over-fetched or where authorization logic performs redundant work per invocation. By combining small, owned data structures, bounded caches, and regular rotation of the key store, you can mitigate memory retention issues specific to API key handling in Axum.