
Replay Attack in Actix with CockroachDB

Replay Attack in Actix with CockroachDB — how this specific combination creates or exposes the vulnerability

A replay attack in an Actix-based service that uses CockroachDB as the backend can occur when an attacker intercepts a valid request—such as an HTTP POST that transfers funds or changes a user state—and re-sends it to the same endpoint to produce the same effect multiple times. Because Actix handlers are typically stateless and rely on application-level identifiers, timestamps, or idempotency keys to prevent replays, weaknesses in how these protections are implemented can be exploited. CockroachDB, a distributed SQL database that provides strong consistency and serializable isolation, does not inherently prevent duplicate commits; it will happily apply the same transaction twice if the application does not enforce uniqueness.

Consider an endpoint that processes a payment without a server-side nonce or a unique constraint on a business identifier such as transaction_id. An attacker can capture a request containing a user identifier, amount, and timestamp, then replay it. Under serializable isolation, CockroachDB will execute the transaction, and if the application does not check whether that transaction_id has already been applied, the second commit will also succeed. This is common when idempotency is implemented only as a client-side counter or a short-lived cache that does not survive restarts or is not consulted on every write.
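The double-apply failure mode above can be shown with a simplified in-memory model. `Ledger`, `apply_naive`, and `apply_idempotent` are illustrative names standing in for the payments table and handler logic; nothing here is an Actix or CockroachDB API.

```rust
use std::collections::HashSet;

// Simplified model: `balance` stands in for a row in the database and
// `applied` for deduplication state keyed by transaction_id.
struct Ledger {
    balance: i64,
    applied: HashSet<String>,
}

impl Ledger {
    // Vulnerable path: the debit is applied unconditionally, so a replayed
    // capture of the same request debits the account a second time.
    fn apply_naive(&mut self, _transaction_id: &str, amount: i64) {
        self.balance -= amount;
    }

    // Hardened path: each transaction_id is recorded and refused on reuse.
    fn apply_idempotent(&mut self, transaction_id: &str, amount: i64) -> bool {
        if !self.applied.insert(transaction_id.to_string()) {
            return false; // replay detected, no state change
        }
        self.balance -= amount;
        true
    }
}

fn main() {
    let request = ("txn-42", 100); // captured (transaction_id, amount)

    let mut naive = Ledger { balance: 500, applied: HashSet::new() };
    naive.apply_naive(request.0, request.1);
    naive.apply_naive(request.0, request.1); // attacker replays the capture
    assert_eq!(naive.balance, 300); // debited twice

    let mut safe = Ledger { balance: 500, applied: HashSet::new() };
    assert!(safe.apply_idempotent(request.0, request.1));
    assert!(!safe.apply_idempotent(request.0, request.1)); // replay rejected
    assert_eq!(safe.balance, 400); // debited once
}
```

An in-memory set like this is only a sketch: it does not survive restarts and is not shared across workers, which is exactly why the remediation below pushes deduplication into the database.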

The risk is amplified when requests carry long-lived authentication tokens, or when a replayed request lands on a different Actix worker that does not share in-memory deduplication state. CockroachDB’s distributed nature means writes are replicated across nodes; without application-side deduplication, there is no single source of truth that prevents the same logical operation from being applied in multiple regions. For example, an attacker might replay a request that creates a reservation or transfers tokens. CockroachDB’s strong consistency guarantees that each transaction commits correctly, but it treats the replay as just another valid transaction, so the duplicate succeeds, leading to double-spending or double-booking unless the application enforces uniqueness via primary keys or unique constraints.

Insecure deserialization or missing integrity checks on the request body can also enable replay. If an Actix handler deserializes JSON into a struct and passes it directly to CockroachDB without verifying a hash or signature of the payload, an attacker can slightly modify non-security fields (e.g., changing a timestamp) while preserving the semantic intent, bypassing naive freshness checks. Without server-side nonces, one-time tokens, or cryptographic signatures, the combination of Actix’s routing and CockroachDB’s consistency can unintentionally amplify the impact of replayed requests.
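The timestamp-shifting bypass can be made concrete with two std-only validation functions. Both are hypothetical stand-ins for request validation logic, not Actix middleware: a freshness window alone accepts a replay whose timestamp was bumped, while a consumed one-time nonce does not.

```rust
use std::collections::HashSet;

// Naive freshness check: accepts any request whose timestamp (in seconds)
// falls within `window` of the server clock `now`. An attacker who replays
// the captured body with an updated timestamp passes this every time.
fn fresh_enough(request_ts: u64, now: u64, window: u64) -> bool {
    now.saturating_sub(request_ts) <= window
}

// Nonce check: each one-time token is consumed on first use, so the replay
// fails no matter what the attacker writes into the timestamp field.
fn consume_nonce(seen: &mut HashSet<String>, nonce: &str) -> bool {
    seen.insert(nonce.to_string())
}

fn main() {
    let now = 1_000_000;

    // The original request passes the freshness check...
    assert!(fresh_enough(now - 5, now, 30));
    // ...and so does the replay a minute later, with the timestamp bumped.
    assert!(fresh_enough(now + 55, now + 60, 30));

    // The nonce check stops the replay regardless of timestamps.
    let mut seen = HashSet::new();
    assert!(consume_nonce(&mut seen, "nonce-abc")); // original accepted
    assert!(!consume_nonce(&mut seen, "nonce-abc")); // replay rejected
}
```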

CockroachDB-Specific Remediation in Actix — concrete code fixes

To mitigate replay attacks in an Actix service using CockroachDB, implement idempotency keys with uniqueness constraints and verify state before committing transactions. The following patterns assume you are using sqlx with CockroachDB (over its PostgreSQL-compatible wire protocol) and Actix Web.

1. Enforce uniqueness at the database layer

Create a table with a unique constraint on the business-level identifier (e.g., transaction_id) so that duplicate replays fail instead of silently succeeding.

-- CockroachDB SQL
CREATE TABLE payments (
    id UUID PRIMARY KEY DEFAULT gen_random_uuid(),
    transaction_id STRING NOT NULL UNIQUE,
    user_id UUID NOT NULL,
    amount DECIMAL(12, 2) NOT NULL,
    status STRING NOT NULL DEFAULT 'pending',
    created_at TIMESTAMPTZ NOT NULL DEFAULT now()
);
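The effect of the UNIQUE constraint can be modeled with a std-only sketch in which a `HashMap` entry plays the role of the `transaction_id` index; in the real service, CockroachDB raises SQLSTATE 23505 (unique_violation) instead. All names here are illustrative.

```rust
use std::collections::hash_map::Entry;
use std::collections::HashMap;

// Stand-in for the duplicate-key error the database would return.
#[derive(Debug, PartialEq)]
struct DuplicateKey;

// Minimal model of the payments table: the unique constraint on
// transaction_id is enforced by the map's entry API.
struct PaymentsTable {
    rows: HashMap<String, (String, i64)>, // transaction_id -> (user_id, amount)
}

impl PaymentsTable {
    fn insert(
        &mut self,
        transaction_id: &str,
        user_id: &str,
        amount: i64,
    ) -> Result<(), DuplicateKey> {
        match self.rows.entry(transaction_id.to_string()) {
            Entry::Occupied(_) => Err(DuplicateKey),
            Entry::Vacant(slot) => {
                slot.insert((user_id.to_string(), amount));
                Ok(())
            }
        }
    }
}

fn main() {
    let mut table = PaymentsTable { rows: HashMap::new() };
    assert_eq!(table.insert("txn-7", "alice", 250), Ok(()));
    // The replayed insert now fails instead of silently succeeding; the
    // handler can map this to an idempotent response rather than a retry.
    assert_eq!(table.insert("txn-7", "alice", 250), Err(DuplicateKey));
}
```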

2. Idempotent handler in Actix with explicit checks

Use a deterministic idempotency key from the request (e.g., a client-supplied Idempotency-Key header) and perform a conditional insert within a transaction. If the key already exists, return the original response instead of applying the operation again.

use actix_web::{error, web, Error, HttpRequest, HttpResponse};
use serde::Deserialize;
use serde_json::json;
use sqlx::Row;
use uuid::Uuid;

#[derive(Deserialize)]
struct PaymentRequest {
    user_id: Uuid,                 // requires sqlx's "uuid" feature
    amount: rust_decimal::Decimal, // requires sqlx's "rust_decimal" feature
}

async fn create_payment(
    pool: web::Data<sqlx::PgPool>,
    http_req: HttpRequest,
    body: web::Json<PaymentRequest>,
) -> Result<HttpResponse, Error> {
    // The client supplies the key once per logical operation; the UNIQUE
    // constraint on payments.transaction_id is the backstop if this
    // read-then-insert check races with a concurrent replay.
    let key = http_req
        .headers()
        .get("Idempotency-Key")
        .and_then(|v| v.to_str().ok())
        .ok_or_else(|| error::ErrorBadRequest("missing Idempotency-Key header"))?
        .to_owned();
    let payment = body.into_inner();

    let mut tx = pool.begin().await.map_err(error::ErrorInternalServerError)?;

    // Check whether this idempotency key was already processed.
    let existing = sqlx::query("SELECT status FROM payments WHERE transaction_id = $1")
        .bind(&key)
        .fetch_optional(&mut *tx)
        .await
        .map_err(error::ErrorInternalServerError)?;

    if let Some(row) = existing {
        // Replay: return the recorded outcome instead of applying the
        // operation again.
        let status: String = row.get("status");
        tx.commit().await.map_err(error::ErrorInternalServerError)?;
        return Ok(HttpResponse::Ok().json(json!({ "status": status })));
    }

    // First sight of this key: apply the payment. Under CockroachDB's
    // serializable isolation, a concurrent replay either aborts with a
    // retryable error or fails the unique constraint -- never a double insert.
    sqlx::query(
        "INSERT INTO payments (transaction_id, user_id, amount, status) \
         VALUES ($1, $2, $3, $4)",
    )
    .bind(&key)
    .bind(payment.user_id)
    .bind(payment.amount)
    .bind("completed")
    .execute(&mut *tx)
    .await
    .map_err(error::ErrorInternalServerError)?;

    tx.commit().await.map_err(error::ErrorInternalServerError)?;
    Ok(HttpResponse::Created().finish())
}

3. Use deterministic operations and avoid client-controlled timestamps for uniqueness

Do not rely solely on timestamps supplied by the client for uniqueness. Instead, derive uniqueness from the combination of user ID and an idempotency key, and let CockroachDB handle time via now(). This prevents attackers from shifting timestamps to bypass freshness checks.

-- Prefer a server-generated idempotency key for sensitive flows.
-- gen_idempotency_key is a hypothetical application-defined function that
-- derives the key from the user ID, the operation's fields, and a coarse
-- server-side time bucket.
INSERT INTO payments (transaction_id, user_id, amount, status)
VALUES (gen_idempotency_key($1, $2, $3), $1, $2, 'completed')
ON CONFLICT (transaction_id) DO NOTHING;
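A server-side derivation of that kind can be sketched in std-only Rust. `derive_idempotency_key` is a hypothetical helper; `DefaultHasher` is used only so the example runs without dependencies, and a real service should use a keyed cryptographic hash such as HMAC-SHA-256 instead.

```rust
use std::collections::hash_map::DefaultHasher;
use std::hash::{Hash, Hasher};

// The key depends only on the user, the operation's semantic fields, and a
// coarse server-side time bucket -- never on a client-supplied timestamp.
// DefaultHasher is deterministic within a process but NOT cryptographic;
// this is an illustration of the derivation shape, not of the hash choice.
fn derive_idempotency_key(user_id: &str, amount_cents: i64, minute_bucket: u64) -> String {
    let mut hasher = DefaultHasher::new();
    user_id.hash(&mut hasher);
    amount_cents.hash(&mut hasher);
    minute_bucket.hash(&mut hasher);
    format!("{}-{:016x}", user_id, hasher.finish())
}

fn main() {
    // The same logical operation in the same minute derives the same key,
    // so a replay collides with the UNIQUE constraint on transaction_id.
    let a = derive_idempotency_key("alice", 10_000, 28_000_123);
    let b = derive_idempotency_key("alice", 10_000, 28_000_123);
    assert_eq!(a, b);

    // A genuinely new operation (next minute bucket) gets a fresh key.
    let c = derive_idempotency_key("alice", 10_000, 28_000_124);
    assert_ne!(a, c);
}
```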

4. Verify state before acting

When processing replayed requests that are not pure inserts (e.g., state transitions), read the current state from CockroachDB and compare it before proceeding. Use serializable isolation (CockroachDB's default) to avoid write skew.

-- Pseudo-SQL for state verification
BEGIN TRANSACTION ISOLATION LEVEL SERIALIZABLE;
SELECT status FROM payments WHERE transaction_id = $1 FOR UPDATE;
-- Proceed only if status is not 'completed'
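The guard behind that SELECT can be expressed as a small state machine. `PaymentStatus` and `try_complete` are illustrative names; in the real handler this check runs on the row locked by FOR UPDATE.

```rust
// Minimal state machine mirroring the SELECT ... FOR UPDATE check: a
// transition is applied only from the expected prior state.
#[derive(Debug, Clone, Copy, PartialEq)]
enum PaymentStatus {
    Pending,
    Completed,
}

// A replayed "complete" request against an already-completed payment is
// rejected instead of being applied a second time.
fn try_complete(current: PaymentStatus) -> Result<PaymentStatus, &'static str> {
    match current {
        PaymentStatus::Pending => Ok(PaymentStatus::Completed),
        PaymentStatus::Completed => Err("already completed; replay ignored"),
    }
}

fn main() {
    // First request transitions the payment.
    assert_eq!(try_complete(PaymentStatus::Pending), Ok(PaymentStatus::Completed));
    // The replay observes the committed state and is refused.
    assert!(try_complete(PaymentStatus::Completed).is_err());
}
```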

Frequently Asked Questions

Can a replay attack happen even if I use TLS and signed requests?
Yes. TLS prevents eavesdropping and tampering in transit, but it does not prevent an attacker from re-sending a valid, signed request. You still need application-level replay protection such as unique idempotency keys, nonces, or one-time tokens.
Does CockroachDB prevent replay attacks by default?
No. CockroachDB provides strong consistency and isolation, but it does not detect or reject duplicate transactions unless you enforce uniqueness constraints or perform explicit state checks in your application.