Cache Poisoning in Axum with Mutual TLS
How This Specific Combination Creates or Exposes the Vulnerability
Cache poisoning occurs when an attacker manipulates cached responses so that subsequent users receive malicious or incorrect data. In Axum, combining HTTP caching with mutual Transport Layer Security (mTLS) can inadvertently expose or amplify cache poisoning risks if request differentiation is not enforced at the cache key level.
When mTLS is enabled, the server authenticates the client using client certificates. However, if the cache key is derived only from the request path and query parameters—and does not include elements such as the client certificate identity or the request headers that mTLS makes available—responses may be shared across distinct authenticated clients. For example, an API might issue user-specific data (e.g., profile details or scoped permissions) while relying on mTLS for access control, yet still serve a cached response to a different user who connects with a different client certificate. This mismatch between transport-layer authentication and application-layer caching leads to information disclosure and potential cache poisoning.
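The difference between a naive cache key and an identity-aware one can be sketched with two plain functions. This is a minimal illustration with made-up names (`naive_cache_key`, `identity_aware_cache_key`), not the API of any particular caching crate:

```rust
/// Vulnerable: the key ignores who the client is, so two clients that
/// authenticated with different certificates share one cache entry.
fn naive_cache_key(path: &str, query: &str) -> String {
    format!("{path}?{query}")
}

/// Safe: the mTLS-derived certificate fingerprint partitions the cache
/// per authenticated identity.
fn identity_aware_cache_key(cert_fingerprint: &str, path: &str, query: &str) -> String {
    format!("{cert_fingerprint}:{path}?{query}")
}

fn main() {
    let alice = identity_aware_cache_key("fp-alice", "/profile", "fields=all");
    let bob = identity_aware_cache_key("fp-bob", "/profile", "fields=all");
    // Same request from two clients, different certificates: distinct entries.
    assert_ne!(alice, bob);
    // The naive key is identical for both clients, so whichever response is
    // cached first would be served to the other.
    assert_eq!(
        naive_cache_key("/profile", "fields=all"),
        naive_cache_key("/profile", "fields=all")
    );
    println!("{alice}\n{bob}");
}
```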
Another scenario involves varying Accept headers or custom headers that mTLS does not inherently validate. If the caching layer in front of Axum uses a naive key that ignores these variations, an attacker who controls the Accept header can cause the server to cache a response with a content type or language intended for one client and then serve it to another. This is particularly relevant when caching is applied at the middleware or reverse-proxy level in front of Axum, where the cache is unaware of the mTLS-bound identity that the application logic assumes is present.
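A key that also covers content-negotiation headers, mirroring an HTTP Vary policy, closes this gap. The struct and field names below are assumptions for illustration, not part of any specific caching crate:

```rust
// Cache-key parts covering both mTLS identity and negotiation headers.
struct CacheKeyParts<'a> {
    fingerprint: &'a str,     // mTLS-derived client identity
    path: &'a str,
    accept: &'a str,          // Accept header, as sent by the client
    accept_language: &'a str, // Accept-Language header
}

fn vary_aware_key(p: &CacheKeyParts) -> String {
    // Normalize header values so trivial casing differences don't fragment
    // the cache, while real variations still produce distinct keys.
    format!(
        "{}|{}|accept={}|lang={}",
        p.fingerprint,
        p.path,
        p.accept.to_ascii_lowercase(),
        p.accept_language.to_ascii_lowercase()
    )
}

fn main() {
    let json = CacheKeyParts {
        fingerprint: "fp-alice",
        path: "/report",
        accept: "application/json",
        accept_language: "en",
    };
    let html = CacheKeyParts { accept: "text/html", ..json };
    // Different Accept headers yield different cache entries, so a response
    // negotiated for one variant is never replayed for another.
    assert_ne!(vary_aware_key(&json), vary_aware_key(&html));
}
```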
Real-world attack patterns mirror the OWASP API Security Top 10 entry API9:2019 — Improper Assets Management — where cached data is treated as authoritative without proper validation. Consider an endpoint that returns sensitive configuration based on a client certificate. Without including the certificate’s fingerprint in the cache key, a poisoned cache entry could allow one client to infer the data of another. In practice, this class of caching misconfiguration can lead to privilege escalation or data exposure across authenticated sessions.
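Deriving the fingerprint itself is straightforward in principle: hash the client certificate's DER bytes. The sketch below is illustrative only — `cert_fingerprint` is a hypothetical helper, and it uses std's `DefaultHasher` solely so the example runs with no dependencies; a real deployment should use a cryptographic hash such as SHA-256 (e.g., via the sha2 crate), since `DefaultHasher` is not collision-resistant:

```rust
use std::collections::hash_map::DefaultHasher;
use std::hash::{Hash, Hasher};

// Hypothetical helper: derive a cache-key prefix from the DER-encoded
// client certificate. NOT cryptographically secure; see lead-in.
fn cert_fingerprint(cert_der: &[u8]) -> String {
    let mut hasher = DefaultHasher::new();
    cert_der.hash(&mut hasher);
    format!("{:016x}", hasher.finish())
}

fn main() {
    // Two distinct certificates produce distinct fingerprints, and therefore
    // distinct cache-key prefixes for otherwise identical requests.
    let a = cert_fingerprint(b"client-a-der-bytes");
    let b = cert_fingerprint(b"client-b-der-bytes");
    assert_ne!(a, b);
    println!("client A: {a}\nclient B: {b}");
}
```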
To mitigate this in design, you must ensure that cache keys incorporate mTLS-derived identifiers when identity is used for authorization. This prevents responses from being incorrectly shared across clients and reduces the surface for cache poisoning in Axum services that rely on mutual TLS for access control.
Mutual TLS-Specific Remediation in Axum — Concrete Code Fixes
Remediation focuses on ensuring that cache keys include mTLS-bound identity or other request dimensions that differentiate clients. Below are concrete Axum examples that demonstrate how to build cache keys using data extracted from TLS connections and how to structure handlers safely.
use std::collections::HashMap;
use std::sync::{Arc, Mutex};

use axum::{
    extract::{Request, State},
    http::StatusCode,
    middleware::{self, Next},
    response::{IntoResponse, Response},
    routing::get,
    Extension, Router,
};

// Identity derived from the client certificate presented during the mTLS
// handshake. In production, the TLS acceptor (e.g., axum-server with a
// rustls ServerConfig that requires client certificates) computes this
// fingerprint from the verified peer certificate and inserts it into the
// request's extensions before the request reaches the router.
#[derive(Clone)]
struct ClientFingerprint(String);

// Middleware that requires an mTLS-derived fingerprint on every request.
// Rejecting requests without one avoids falling back to a shared "unknown"
// identity, which would reintroduce cross-client cache sharing.
async fn require_client_fingerprint(req: Request, next: Next) -> Response {
    if req.extensions().get::<ClientFingerprint>().is_none() {
        return StatusCode::UNAUTHORIZED.into_response();
    }
    next.run(req).await
}

// AppState holds a response cache keyed by (fingerprint, path), so an entry
// cached for one authenticated client is never served to another.
#[derive(Default)]
struct AppState {
    cache: Mutex<HashMap<(String, String), String>>,
}

// Example handler that uses fingerprint-aware caching.
async fn user_profile(
    State(state): State<Arc<AppState>>,
    Extension(ClientFingerprint(fingerprint)): Extension<ClientFingerprint>,
) -> String {
    let key = (fingerprint.clone(), "/profile".to_string());
    if let Some(cached) = state.cache.lock().unwrap().get(&key) {
        return cached.clone();
    }
    // In a real implementation, load data scoped to this authenticated client.
    let body = format!("profile data for {fingerprint}");
    state.cache.lock().unwrap().insert(key, body.clone());
    body
}

#[tokio::main]
async fn main() {
    let state = Arc::new(AppState::default());
    let app: Router = Router::new()
        .route("/profile", get(user_profile))
        .layer(middleware::from_fn(require_client_fingerprint))
        .with_state(state);
    // Serve `app` behind a TLS acceptor configured for mutual TLS: the
    // acceptor must verify the client certificate and insert
    // ClientFingerprint into each accepted request's extensions (not shown).
    let _ = app;
}
The key remediation steps are:
- Include mTLS-bound identifiers (e.g., certificate fingerprint, subject DN, or a mapped client ID) in your cache key construction.
- Avoid using only path and query parameters as the sole cache key component when mTLS enforces client-specific authorization.
- Validate that cached responses are never served across different authenticated identities, especially when headers that vary by client are not part of the key.
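The third point can be enforced structurally: make the caller's identity a mandatory parameter of every cache lookup, so an entry written for one client is unreachable from another. A minimal std-only sketch (the `IdentityScopedCache` type is illustrative, not a real crate):

```rust
use std::collections::HashMap;

// Minimal per-identity cache: entries are keyed by (client_id, resource),
// and lookups cannot omit the identity.
struct IdentityScopedCache {
    entries: HashMap<(String, String), String>,
}

impl IdentityScopedCache {
    fn new() -> Self {
        Self { entries: HashMap::new() }
    }

    fn put(&mut self, client: &str, resource: &str, body: &str) {
        self.entries
            .insert((client.to_string(), resource.to_string()), body.to_string());
    }

    fn get(&self, client: &str, resource: &str) -> Option<&String> {
        self.entries.get(&(client.to_string(), resource.to_string()))
    }
}

fn main() {
    let mut cache = IdentityScopedCache::new();
    cache.put("fp-alice", "/profile", "alice's profile");
    // Bob's certificate fingerprint cannot retrieve Alice's cached entry.
    assert!(cache.get("fp-bob", "/profile").is_none());
    assert_eq!(cache.get("fp-alice", "/profile").unwrap(), "alice's profile");
}
```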
By aligning cache keys with mTLS identity, Axum services can safely leverage caching without introducing cross-client information leakage or poisoning.