Cache Poisoning in Actix with Mutual TLS
Cache Poisoning in Actix with Mutual TLS — how this specific combination creates or exposes the vulnerability
Cache poisoning occurs when an attacker tricks a cache (e.g., CDN, reverse proxy, or application-level cache) into storing malicious content under a legitimate key. In Actix, if responses are cached based on insufficient request validation and the server uses Mutual TLS (mTLS) for client authentication, the combination can expose or amplify cache poisoning risks.
With mTLS, the server validates the client certificate before processing the request. However, caching decisions are often made after TLS-level authentication, and developers may assume that mTLS alone prevents cache poisoning. This assumption is dangerous because mTLS does not canonicalize or normalize request inputs used to construct cache keys. If Actix uses request attributes like headers, query parameters, or body fragments to form cache keys without strict validation, an authenticated client with a valid certificate can submit crafted inputs that lead to distinct cache entries for semantically equivalent requests (e.g., varying header casing, parameter ordering, or injected cache-control directives).
For example, an attacker with a valid client certificate might send GET /api/resource?user_id=123 and then GET /api/resource?user_id=123&profile=1. If the caching layer treats these as different keys, poisoned content for the second key may be served to other users who share the same logical resource. Actix middleware that caches based on the full URI, including unnormalized query strings, can therefore store variant-specific responses that should have been normalized or excluded. Additionally, if the application caches user-specific or role-specific responses (e.g., based on certificate-derived claims) without segregating cache keys by authorization context, one client may receive another client’s cached data, leading to information exposure or privilege bypass.
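The normalization gap described above can be illustrated with a minimal sketch. The `canonical_query` helper below is hypothetical (not an Actix API): it sorts query parameters by name, collapses duplicates, and drops volatile keys, so that semantically equivalent query strings map to one cache key. It compares raw (undecoded) parameter text, which is an assumption; production code should percent-decode first.

```rust
use std::collections::BTreeMap;

/// Collapse a raw query string (without the leading '?') into a canonical
/// form: parameters sorted by name, duplicates collapsed to the last value,
/// and volatile client-chosen keys dropped entirely.
fn canonical_query(raw: &str) -> String {
    let mut params: BTreeMap<&str, &str> = BTreeMap::new();
    for pair in raw.split('&').filter(|p| !p.is_empty()) {
        let (k, v) = pair.split_once('=').unwrap_or((pair, ""));
        // Keys that must never influence the cache key
        if !["token", "session", "callback"].contains(&k) {
            params.insert(k, v);
        }
    }
    params
        .into_iter()
        .map(|(k, v)| format!("{k}={v}"))
        .collect::<Vec<_>>()
        .join("&")
}
```

With this in place, `user_id=123&profile=1` and `profile=1&user_id=123` produce the same canonical form, so reordering alone can no longer mint a fresh cache entry.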
Moreover, cache poisoning via mTLS-enabled endpoints can exploit headers that are trusted after TLS client verification. If Actix caches responses with Vary headers that include client certificate details or custom headers set after mTLS authentication, an attacker can manipulate these headers to poison cache entries. For instance, injecting a X-Cache-Key header that influences caching behavior could cause the server to store a malicious variant. Because mTLS ensures the client is authenticated, developers might inadvertently place greater trust in request metadata, making it easier to overlook input validation and canonicalization for cache key construction.
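One way to neutralize header-driven poisoning is to build cache keys only from an explicit allowlist of headers, so an injected `X-Cache-Key` is simply ignored. The sketch below is illustrative (plain tuples rather than Actix's `HeaderMap`, and a hypothetical allowlist); names are lowercased and values trimmed so casing variants collapse to one entry.

```rust
/// Keep only an explicit allowlist of headers (compared case-insensitively)
/// when deriving a cache key. Anything outside the allowlist, including a
/// client-injected X-Cache-Key, is dropped before key construction.
fn cache_relevant_headers(headers: &[(String, String)]) -> Vec<(String, String)> {
    const ALLOWED: [&str; 2] = ["accept", "accept-encoding"];
    headers
        .iter()
        .filter(|(name, _)| ALLOWED.contains(&name.to_ascii_lowercase().as_str()))
        .map(|(name, value)| (name.to_ascii_lowercase(), value.trim().to_string()))
        .collect()
}
```

An allowlist is preferable to a denylist here: new or unexpected headers are excluded by default instead of silently becoming part of the key space.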
Real-world attack patterns mirror the web cache poisoning and cache deception techniques documented by OWASP and in public research on cache key mishandling, where unkeyed or unnormalized inputs let attackers inject content under legitimate keys. In an Actix service using mTLS, the risk is elevated when caching is applied without normalizing inputs or segregating contexts, because authenticated requests are assumed safe and cached without sufficient scrutiny.
Mutual TLS-Specific Remediation in Actix — concrete code fixes
To mitigate cache poisoning in Actix with mTLS, focus on canonicalizing cache keys, validating all inputs used for caching, and ensuring cache controls are consistent regardless of mTLS authentication. Below are concrete code examples using Actix-web with Rust, including proper mTLS configuration and cache-safe request handling.
1. Configure Mutual TLS in Actix
Set up Actix to require and validate client certificates. Use rustls to configure the server with CA verification.
use actix_web::{middleware::Logger, App, HttpServer};
use rustls::server::AllowAnyAuthenticatedClient;
use rustls::{Certificate, PrivateKey, RootCertStore, ServerConfig};
use rustls_pemfile::{certs, pkcs8_private_keys};
use std::fs::File;
use std::io::BufReader;

// Targets actix-web 4 with the `rustls` feature (rustls 0.20); on newer
// rustls versions the verifier construction differs slightly.
fn load_rustls_config(ca_file: &str, cert_file: &str, key_file: &str) -> std::io::Result<ServerConfig> {
    // CA certificates used to verify *client* certificates (mTLS)
    let mut ca_reader = BufReader::new(File::open(ca_file)?);
    let mut roots = RootCertStore::empty();
    for der in certs(&mut ca_reader)? {
        roots
            .add(&Certificate(der))
            .map_err(|e| std::io::Error::new(std::io::ErrorKind::InvalidData, e))?;
    }
    // Server certificate chain and PKCS#8 private key
    let mut cert_reader = BufReader::new(File::open(cert_file)?);
    let cert_chain: Vec<Certificate> = certs(&mut cert_reader)?.into_iter().map(Certificate).collect();
    let mut key_reader = BufReader::new(File::open(key_file)?);
    let mut keys = pkcs8_private_keys(&mut key_reader)?;
    if keys.is_empty() {
        return Err(std::io::Error::new(std::io::ErrorKind::InvalidData, "no PKCS#8 private key found"));
    }
    let server_key = PrivateKey(keys.remove(0));
    // Require a client certificate signed by the trusted CA on every connection
    let mut config = ServerConfig::builder()
        .with_safe_defaults()
        .with_client_cert_verifier(AllowAnyAuthenticatedClient::new(roots))
        .with_single_cert(cert_chain, server_key)
        .map_err(|e| std::io::Error::new(std::io::ErrorKind::InvalidData, e))?;
    config.alpn_protocols = vec![b"h2".to_vec(), b"http/1.1".to_vec()];
    Ok(config)
}

#[actix_web::main]
async fn main() -> std::io::Result<()> {
    let tls_config = load_rustls_config("ca.pem", "server.pem", "server.key")?;
    HttpServer::new(|| {
        App::new()
            .wrap(Logger::default())
            // secure_handler is defined in the next section
            .route("/api/resource", actix_web::web::get().to(secure_handler))
    })
    .bind_rustls("127.0.0.1:8443", tls_config)?
    .run()
    .await
}
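To exercise this configuration locally, a throwaway PKI can be generated with OpenSSL. The sketch below assumes OpenSSL 3 on the PATH; the file names ca.pem, server.pem, and server.key match the config above, while client.pem and client.key are hypothetical names for a test client. Production certificates would also need proper subjectAltName entries, which are omitted here for brevity.

```shell
# Self-signed throwaway CA (valid for one day)
openssl req -x509 -newkey rsa:2048 -nodes -keyout ca.key -out ca.pem \
    -days 1 -subj "/CN=test-ca"
# Server key + CSR, then a certificate signed by the CA
openssl req -newkey rsa:2048 -nodes -keyout server.key -out server.csr \
    -subj "/CN=localhost"
openssl x509 -req -in server.csr -CA ca.pem -CAkey ca.key -CAcreateserial \
    -out server.pem -days 1
# Client certificate signed by the same CA, for exercising mTLS
openssl req -newkey rsa:2048 -nodes -keyout client.key -out client.csr \
    -subj "/CN=test-client"
openssl x509 -req -in client.csr -CA ca.pem -CAkey ca.key -CAcreateserial \
    -out client.pem -days 1
```

A request can then be made with curl, presenting the client certificate: curl --cacert ca.pem --cert client.pem --key client.key https://127.0.0.1:8443/api/resource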
2. Canonicalize Cache Keys and Validate Inputs
Ensure cache keys are built from normalized request attributes, excluding attacker-controllable variations. Do not rely solely on the raw URI.
use actix_web::{HttpMessage, HttpRequest, HttpResponse};
use std::collections::hash_map::DefaultHasher;
use std::hash::{Hash, Hasher};

/// Client-certificate fingerprint, inserted into request extensions by
/// trusted middleware after mTLS validation. Never read this from a request
/// header: headers remain client-controlled even over mTLS.
#[derive(Clone)]
struct CertFingerprint(String);

fn canonical_cache_key(req: &HttpRequest) -> u64 {
    let mut hasher = DefaultHasher::new();
    // Normalize method and path
    req.method().as_str().hash(&mut hasher);
    req.path().hash(&mut hasher);
    // Normalize query: query_string() already excludes the leading '?'.
    // Sort parameters and exclude tracking/noise params.
    let query = req.query_string();
    if !query.is_empty() {
        let mut params: Vec<&str> = query.split('&').collect();
        params.sort_unstable();
        for p in params {
            if let Some((k, v)) = p.split_once('=') {
                // Exclude session-like or client-provided volatile keys
                if !["token", "session", "callback"].contains(&k) {
                    k.hash(&mut hasher);
                    v.hash(&mut hasher);
                }
            }
        }
    }
    // Segregate cache entries by authorization context using the certificate
    // fingerprint that validated middleware stored in request extensions.
    if let Some(fprint) = req.extensions().get::<CertFingerprint>() {
        fprint.0.hash(&mut hasher);
    }
    hasher.finish()
}

async fn secure_handler(req: HttpRequest) -> HttpResponse {
    let key = canonical_cache_key(&req);
    // Use `key` to look up or store in a cache (e.g., Redis, in-memory)
    HttpResponse::Ok().body(format!("CacheKey:{}", key))
}
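The comment in `secure_handler` leaves the cache itself abstract. A minimal in-memory sketch of the lookup-or-store step, keyed by the canonical u64 key, could look like the following; the `ResponseCache` type is hypothetical, and a production cache would add TTLs, size bounds, and eviction.

```rust
use std::collections::HashMap;
use std::sync::Mutex;

/// Minimal in-memory response cache keyed by the canonical u64 cache key.
/// Because the key already encodes the normalized request and authorization
/// context, entries cannot collide across clients or across query variants.
struct ResponseCache {
    entries: Mutex<HashMap<u64, String>>,
}

impl ResponseCache {
    fn new() -> Self {
        Self { entries: Mutex::new(HashMap::new()) }
    }

    /// Return the cached body for `key`, or build, store, and return it.
    fn get_or_insert_with(&self, key: u64, build: impl FnOnce() -> String) -> String {
        let mut entries = self.entries.lock().unwrap();
        entries.entry(key).or_insert_with(build).clone()
    }
}
```

In a handler, the body closure runs only on a cache miss, so a poisoned response can only ever be stored under the attacker's own canonical key.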
3. Set Safe Cache-Control and Vary Headers
Explicitly define caching behavior to avoid storing sensitive or user-specific responses. Use Vary carefully and avoid including volatile mTLS-derived metadata unless strictly necessary.
use actix_web::{HttpRequest, HttpResponse};

async fn cached_resource(_req: HttpRequest) -> HttpResponse {
    // After validating mTLS and authorization, set explicit cache headers.
    // Responses derived from a client certificate are per-client: mark them
    // `private` (or `no-store`) so shared caches never hold them. Reserve
    // `public` for content that is truly identical for every client.
    HttpResponse::Ok()
        .insert_header(("Cache-Control", "private, max-age=3600, must-revalidate"))
        // Vary only on headers that legitimately change the response body;
        // never on client-controlled custom headers.
        .insert_header(("Vary", "Accept-Encoding"))
        .body("Standardized response")
}
4. Validate and Sanitize Inputs Before Caching
Never trust query parameters or headers that influence caching. Validate and sanitize them, and normalize representations before using them in cache logic.
fn normalize_query_param(value: &str) -> String {
    // Lowercase, trim, and keep only alphanumerics, '-', and '_'
    value
        .trim()
        .to_lowercase()
        .chars()
        .filter(|c| c.is_alphanumeric() || *c == '-' || *c == '_')
        .collect()
}
// Use normalized values when building cache keys or storing responses.