Memory Leak in Actix with Hmac Signatures
Memory Leak in Actix with Hmac Signatures — how this specific combination creates or exposes the vulnerability
In Actix-web, using HMAC signatures for request authentication can introduce a memory leak when cryptographic operations allocate buffers on each request and those buffers are kept alive longer than necessary. This typically occurs when the application creates per-request HMAC state (e.g., via hmac::Hmac<Sha256>) and large temporary buffers remain referenced because of how futures and streams are composed.
Consider an Actix handler that computes an HMAC for every incoming request:
use actix_web::{web, HttpResponse, Result};
use hmac::{Hmac, Mac};
use sha2::Sha256;
type HmacSha256 = Hmac<Sha256>;
async fn auth_handler(
    body: String,
    key: web::Data<Vec<u8>>,
) -> Result<HttpResponse> {
    let mut mac = HmacSha256::new_from_slice(&key).map_err(|_| {
        actix_web::error::ErrorInternalServerError("HMAC init failed")
    })?;
    mac.update(body.as_bytes());
    let result = mac.finalize();
    let _code = result.into_bytes();
    // If this handler is invoked frequently and the request bodies are large,
    // intermediate buffers tied to `mac` and `body` may remain referenced
    // longer than necessary, contributing to heap growth.
    Ok(HttpResponse::Ok().finish())
}
In this pattern, the body and the HMAC computation buffers may be retained in memory across many invocations if the futures are not properly dropped or if the Actix runtime holds references in its task system. This becomes a memory leak under sustained load: each request adds small objects that are never returned to the allocator promptly, increasing RSS over time.
The risk is higher when the HMAC key is large or when the handler processes large payloads. middleBrick’s scans detect this pattern during the Unsafe Consumption and Input Validation checks, noting that unbounded per-request allocations without pooling or explicit cleanup contribute to the security risk score. A high-risk finding here does not mean data is exposed, but that resource exhaustion could lead to denial of service.
Additionally, if the application caches HMAC results or keys in static structures without size limits, the leak compounds. For example:
use once_cell::sync::Lazy;
use std::sync::Mutex;
static CACHE: Lazy<Mutex<std::collections::HashMap<String, Vec<u8>>>> = Lazy::new(|| Mutex::new(std::collections::HashMap::new()));
fn cached_hmac(key: &[u8], data: &str) -> Vec<u8> {
    let mut cache = CACHE.lock().unwrap();
    // Keyed by both the HMAC key and the message, so every distinct request
    // body adds a new entry that is never evicted.
    cache
        .entry(format!("{:x?}:{}", key, data))
        .or_insert_with(|| {
            let mut mac = HmacSha256::new_from_slice(key).unwrap();
            mac.update(data.as_bytes());
            mac.finalize().into_bytes().to_vec()
        })
        .clone()
}
If the cache grows unbounded, memory usage grows indefinitely. middleBrick’s Inventory Management check flags unbounded caches as a finding, and its Data Exposure checks verify whether sensitive key material might persist in memory longer than intended.
Hmac Signature-Specific Remediation in Actix — concrete code fixes
To mitigate memory leaks when using HMAC signatures in Actix, focus on reducing per-request allocations and ensuring buffers are released promptly. Prefer bounded, short-lived data structures and avoid retaining large objects in async scopes.
1) Reuse the HMAC instance where possible and avoid recreating large buffers on every call. Cap the size of the request-local body buffer, as in the extractor below; a sketch of reusing a pre-keyed HMAC instance follows it:
use actix_web::error::{ErrorBadRequest, ErrorInternalServerError, ErrorPayloadTooLarge};
use actix_web::{dev::Payload, web, Error, FromRequest, HttpRequest};
use futures_util::future::LocalBoxFuture;
use hmac::{Hmac, Mac};
use sha2::Sha256;

type HmacSha256 = Hmac<Sha256>;

struct HmacExtractor {
    // HMAC tag computed over the request body.
    code: Vec<u8>,
}

impl FromRequest for HmacExtractor {
    type Error = Error;
    type Future = LocalBoxFuture<'static, Result<HmacExtractor, Error>>;

    fn from_request(req: &HttpRequest, payload: &mut Payload) -> Self::Future {
        // Start reading the body before entering the async block so the
        // borrowed payload is not captured by the 'static future.
        let body_fut = web::Bytes::from_request(req, payload);
        let req = req.clone();
        Box::pin(async move {
            let body = body_fut.await.map_err(|_| ErrorBadRequest("invalid body"))?;
            // Bound check: reject overly large payloads early (1 MiB cap).
            if body.len() > 1_048_576 {
                return Err(ErrorPayloadTooLarge("payload exceeds 1 MiB"));
            }
            let key = req
                .app_data::<web::Data<Vec<u8>>>()
                .ok_or_else(|| ErrorInternalServerError("missing HMAC key"))?;
            let mut mac = HmacSha256::new_from_slice(key.as_slice())
                .map_err(|_| ErrorInternalServerError("HMAC init failed"))?;
            mac.update(&body);
            let code = mac.finalize().into_bytes().to_vec();
            Ok(HmacExtractor { code })
        })
    }
}
This approach caps request size and avoids unbounded growth. The HMAC computation is scoped to the request lifetime and dropped immediately after extraction.
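As a complement to the extractor, here is a minimal sketch of the reuse idea from point 1, assuming a single application-wide key: the keyed HMAC state is built once at startup, stored in app data, and cloned per request, so the key schedule is never recomputed and the per-request clone is dropped as soon as the handler returns. The handler name sign_handler, the /sign route, and the literal key are illustrative assumptions, not part of the original example.
use actix_web::{web, App, HttpResponse, HttpServer, Result};
use hmac::{Hmac, Mac};
use sha2::Sha256;

type HmacSha256 = Hmac<Sha256>;

// Hypothetical handler that clones a pre-keyed HMAC instance instead of
// re-deriving the keyed state from the raw key on every request.
async fn sign_handler(
    body: web::Bytes,
    base_mac: web::Data<HmacSha256>,
) -> Result<HttpResponse> {
    // Cloning copies only the small internal hash state; the HMAC key
    // schedule is not re-run, and the clone is dropped when the handler ends.
    let mut mac = base_mac.get_ref().clone();
    mac.update(&body);
    let _tag = mac.finalize().into_bytes();
    // ... attach or verify the tag here ...
    Ok(HttpResponse::Ok().finish())
}

#[actix_web::main]
async fn main() -> std::io::Result<()> {
    // The keyed state is created once; in practice the key would come from
    // configuration or a secret store rather than a literal.
    let base_mac = HmacSha256::new_from_slice(b"example-key").expect("HMAC init failed");
    let base_mac = web::Data::new(base_mac);
    HttpServer::new(move || {
        App::new()
            .app_data(base_mac.clone())
            .route("/sign", web::post().to(sign_handler))
    })
    .bind(("127.0.0.1", 8080))?
    .run()
    .await
}
Cloning Hmac<Sha256> copies only a few blocks of digest state, which is much cheaper than re-deriving the keyed state from the raw key on every request.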
2) Use a bounded cache with eviction for HMAC keys or results to prevent unbounded memory growth:
use lru::LruCache;
use once_cell::sync::Lazy;
use std::num::NonZeroUsize;
use std::sync::Mutex;
// Bounded cache of HMAC results, keyed by (key, message).
static KEY_CACHE: Lazy<Mutex<LruCache<(Vec<u8>, String), Vec<u8>>>> = Lazy::new(|| {
    Mutex::new(LruCache::new(NonZeroUsize::new(128).unwrap()))
});

fn cached_hmac_limited(key: &[u8], data: &str) -> Vec<u8> {
    let cache_key = (key.to_vec(), data.to_string());
    let mut cache = KEY_CACHE.lock().unwrap();
    if let Some(code) = cache.get(&cache_key) {
        return code.clone();
    }
    let mut mac = HmacSha256::new_from_slice(key).unwrap();
    mac.update(data.as_bytes());
    let code = mac.finalize().into_bytes().to_vec();
    cache.put(cache_key, code.clone());
    code
}
An LRU cache bounds memory usage and automatically evicts old entries. middleBrick's Pro plan includes continuous monitoring that can alert you when cache sizes approach configured limits, helping you maintain a stable memory footprint.
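As a quick, hypothetical check of that bound, the snippet below (reusing the KEY_CACHE and cached_hmac_limited definitions above, with an assumed example key) feeds more distinct inputs than the cache capacity and confirms the entry count never exceeds the 128-entry cap.
// Hypothetical check of the eviction bound; assumes the KEY_CACHE and
// cached_hmac_limited definitions above are in scope.
fn main() {
    for i in 0..200u32 {
        let data = format!("payload-{i}");
        let _ = cached_hmac_limited(b"example-key", &data);
    }
    // The LRU cap guarantees at most 128 retained results regardless of volume.
    assert!(KEY_CACHE.lock().unwrap().len() <= 128);
}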
3) Prefer shorter-lived, zero-copy abstractions and avoid holding Hmac objects across await points unnecessarily. If you use Actix’s app data to store key material, keep it a lightweight shared reference (e.g., web::Data<Arc<[u8]>>) rather than duplicating large buffers, as in the sketch below.
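A minimal sketch of that layout, assuming a single application-wide key shared as Arc<[u8]>: the key bytes are stored once in app data, and the HMAC state lives only inside a narrow block so it is dropped before any later await point in the handler. The verify_handler name, the /verify route, and the literal key are illustrative assumptions.
use actix_web::{web, App, HttpResponse, HttpServer, Result};
use hmac::{Hmac, Mac};
use sha2::Sha256;
use std::sync::Arc;

type HmacSha256 = Hmac<Sha256>;

async fn verify_handler(
    body: web::Bytes,
    key: web::Data<Arc<[u8]>>,
) -> Result<HttpResponse> {
    // Compute the tag in a narrow scope so the Hmac state is dropped
    // before any subsequent .await point in the handler.
    let tag = {
        let mut mac = HmacSha256::new_from_slice(&key)
            .map_err(|_| actix_web::error::ErrorInternalServerError("HMAC init failed"))?;
        mac.update(&body);
        mac.finalize().into_bytes()
    };
    // A real handler would compare `tag` against a client-supplied signature.
    let _ = tag;
    Ok(HttpResponse::Ok().finish())
}

#[actix_web::main]
async fn main() -> std::io::Result<()> {
    // One Arc<[u8]> is shared across workers; no per-worker copy of the key.
    let key: Arc<[u8]> = Arc::from(b"example-key".as_slice());
    let key = web::Data::new(key);
    HttpServer::new(move || {
        App::new()
            .app_data(key.clone())
            .route("/verify", web::post().to(verify_handler))
    })
    .bind(("127.0.0.1", 8080))?
    .run()
    .await
}
Because web::Data is itself reference-counted, each worker shares the same Arc<[u8]> rather than holding its own copy of the key bytes.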
By combining size caps, bounded caches, and timely dropping of cryptographic state, you reduce the memory footprint and eliminate the leak pattern that would otherwise be flagged by middleBrick’s scanning and runtime checks.