Replay Attacks in Axum with API Keys
Replay Attacks in Axum with API Keys — how this specific combination creates or exposes the vulnerability
A replay attack in an Axum service that relies solely on API keys occurs when an attacker intercepts a valid request and re-sends it to the same endpoint to achieve unauthorized access or cause unintended effects. Because API keys are typically static credentials transmitted with each request, they do not inherently prevent replays unless additional protections are implemented. In Axum, handlers often read the API key from headers, and if the application does not include a nonce, timestamp, or other per-request uniqueness, an attacker can capture a signed request—such as a financial transfer or privileged configuration change—and replay it to the server to reproduce the original effect.
The vulnerability is contextual: API keys provide authentication (who you are) but not freshness or integrity of the request itself. For example, an API key sent in a header like X-API-Key can be extracted from a network trace or compromised log and reused. If the backend does not validate a one-time token, a short-lived timestamp, or a cryptographic nonce, the server treats the replayed request as legitimate. This becomes especially dangerous when the operation is non-idempotent or when authorization decisions are made based only on the key without additional checks.
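To make the gap concrete, here is a minimal sketch of the vulnerable pattern, reduced to the authorization check itself (the `authorize` helper and the `sk_live_abc123` key are hypothetical, not part of any real API): a server that validates only a static API key cannot distinguish an original request from a byte-for-byte replay of it.

```rust
use std::collections::HashSet;

// The entire "authentication" of a static-key design: membership in a key set.
// Nothing here encodes freshness, so a captured request passes this check
// every time it is replayed.
fn authorize(valid_keys: &HashSet<String>, presented_key: &str) -> bool {
    valid_keys.contains(presented_key)
}
```

Calling `authorize` twice with the same captured key succeeds twice; if the guarded operation is non-idempotent (a transfer, a config change), the replay repeats its effect.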
From a scanning perspective, middleBrick tests for this class of issue under BOLA/IDOR and Input Validation checks. It inspects whether endpoints that accept API keys incorporate replay-preventing mechanisms such as timestamps, nonces, or idempotency keys, and whether they reject requests with reused or out-of-window values. The tool also flags endpoints that accept sensitive operations without additional per-request context, highlighting the need for replay resistance in designs that rely on static credentials like API keys.
API Key-Specific Remediation in Axum — concrete code fixes
To mitigate replay attacks in Axum when using API keys, you should introduce per-request uniqueness and server-side validation. Two widely applicable approaches are idempotency keys and timestamp/nonce validation. Below are concrete, idiomatic Axum examples that demonstrate how to implement these protections.
1) Idempotency key pattern
Require clients to send a unique idempotency key for state-changing operations. Store processed keys for a reasonable TTL and reject duplicates.
use axum::{
    async_trait,
    extract::FromRequestParts,
    http::{request::Parts, StatusCode},
    response::IntoResponse,
    routing::post,
    Json, Router,
};
use std::collections::HashSet;
use std::sync::Arc;
use tokio::sync::Mutex;

#[derive(serde::Deserialize)]
struct CreatePaymentRequest {
    amount: u64,
}

// Client-supplied idempotency key (e.g., a UUID), taken from the request header.
struct IdempotencyKey(String);

#[async_trait]
impl<S> FromRequestParts<S> for IdempotencyKey
where
    S: Send + Sync,
{
    type Rejection = (StatusCode, &'static str);

    async fn from_request_parts(parts: &mut Parts, _state: &S) -> Result<Self, Self::Rejection> {
        parts
            .headers
            .get("Idempotency-Key")
            .and_then(|v| v.to_str().ok())
            .map(|s| IdempotencyKey(s.to_string()))
            .ok_or((StatusCode::BAD_REQUEST, "missing Idempotency-Key header"))
    }
}

lazy_static::lazy_static! {
    // Processed keys. In production, use a shared store with a TTL
    // (e.g., Redis) instead of process-local memory.
    static ref KEYS: Arc<Mutex<HashSet<String>>> = Arc::new(Mutex::new(HashSet::new()));
}

async fn create_payment(
    // Extractor order matters in Axum: the body extractor (Json) must come last.
    IdempotencyKey(key): IdempotencyKey,
    Json(payload): Json<CreatePaymentRequest>,
) -> impl IntoResponse {
    let mut guard = KEYS.lock().await;
    if guard.contains(&key) {
        return (StatusCode::CONFLICT, "duplicate idempotency key").into_response();
    }
    guard.insert(key);
    // process payment using payload.amount
    (StatusCode::CREATED, "payment accepted").into_response()
}

fn app() -> Router {
    Router::new().route("/payments", post(create_payment))
}
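Stripped of the Axum plumbing, the duplicate check above reduces to one set operation. A minimal sketch, assuming an in-memory set (a production store would add a TTL so keys expire; `record_once` is a hypothetical helper, not an Axum API):

```rust
use std::collections::HashSet;

// Returns true if this is the first use of `key`, false on a replay.
// `HashSet::insert` returns false when the value was already present,
// which is exactly the "duplicate idempotency key" case.
fn record_once(processed: &mut HashSet<String>, key: &str) -> bool {
    processed.insert(key.to_string())
}
```

The first call with a given key returns true and records it; any later call with the same key returns false, which the handler maps to 409 Conflict.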
2) Timestamp + API key freshness validation
Require a timestamp (or nonce) header and reject requests with timestamps outside an allowed window (for example, ±2 minutes). Combine this with your API key lookup to ensure freshness.
use axum::{
    http::StatusCode,
    response::IntoResponse,
    routing::post,
    Json, Router,
};
use std::time::{SystemTime, UNIX_EPOCH};

fn validate_freshness(
    api_key: &str,
    timestamp_header: &str,
) -> Result<(), (StatusCode, &'static str)> {
    // Verify api_key against your store first (omitted for brevity).
    let _ = api_key;
    let ts: u64 = timestamp_header
        .parse()
        .map_err(|_| (StatusCode::BAD_REQUEST, "invalid timestamp"))?;
    let now = SystemTime::now()
        .duration_since(UNIX_EPOCH)
        .expect("system clock is before the UNIX epoch")
        .as_secs();
    const ALLOWED_DRIFT: u64 = 120; // ±2 minutes
    if ts + ALLOWED_DRIFT < now || ts > now + ALLOWED_DRIFT {
        return Err((StatusCode::FORBIDDEN, "stale request"));
    }
    Ok(())
}

#[derive(serde::Deserialize)]
struct SensitiveRequest {
    api_key: String,
    timestamp: String, // Unix seconds, as a string
    data: String,
}

async fn handle_sensitive(
    Json(payload): Json<SensitiveRequest>,
) -> impl IntoResponse {
    match validate_freshness(&payload.api_key, &payload.timestamp) {
        Ok(()) => {
            // proceed with key-based auth and business logic using payload.data
            (StatusCode::OK, "accepted").into_response()
        }
        Err(e) => e.into_response(),
    }
}

fn app() -> Router {
    Router::new().route("/secure", post(handle_sensitive))
}
In both examples, API keys remain the primary credential, but replay resistance is achieved by adding either an idempotency key or a timestamp/nonce with tight validation. middleBrick’s scans verify the presence and correct application of such controls under its BOLA/IDOR and Input Validation checks.