Denial of Service in Actix with CockroachDB
Denial of Service in Actix with CockroachDB — how this specific combination creates or exposes the vulnerability
When an Actix web service depends on CockroachDB, DoS risks arise from a mismatch between request concurrency in Actix and CockroachDB’s resource characteristics. CockroachDB is a distributed SQL database that supports high concurrency but can become a bottleneck under uncontrolled load, long-running queries, or connection pressure. In an Actix application, each incoming HTTP request that issues a database call can lead to resource exhaustion if queries are unbounded, if sessions are not pooled properly, or if retries aggressively multiply load.
Specifically, unbounded read or write operations in Actix handlers can generate heavy CockroachDB workloads. For example, a handler that iterates over large result sets without pagination streams many rows to the client while holding a database connection or session open. If many clients issue such requests concurrently, CockroachDB may exhaust available connections or I/O capacity, causing increased latency or failed requests that manifest as a DoS condition for the API.
Another vector is schema and query design. Without proper indexes, CockroachDB may perform full table scans; in Actix, if these queries are triggered per request, the cluster can experience high CPU and disk usage. Additionally, long-running transactions in Actix that interact with CockroachDB can hold resources and block other operations, reducing throughput. Network and client-side retry storms can amplify this: if Actix services or clients retry failed requests aggressively during transient CockroachDB backpressure, the combined load can push the system into a sustained degraded state.
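As a sketch of the unbounded-read anti-pattern described above (the table and column names are illustrative, not from a real schema):

```sql
-- Anti-pattern: issued once per HTTP request, this scans and streams the
-- entire table with no LIMIT, holding a pooled connection the whole time.
-- Without an index on created_at, CockroachDB must also sort the full scan.
SELECT * FROM items ORDER BY created_at DESC;
```

A few dozen concurrent clients hitting a handler that runs this query can saturate the pool and the cluster's I/O at once.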
OpenAPI/Swagger analysis can highlight endpoints with heavy payload expectations or missing pagination, which correlates with potential DoS surfaces. middleBrick’s checks for Rate Limiting and Input Validation can flag endpoints where large or uncontrolled requests interact with database operations, helping to identify DoS-prone designs before deployment.
CockroachDB-Specific Remediation in Actix — concrete code fixes
Mitigate DoS in Actix with CockroachDB by controlling concurrency, bounding query cost, and isolating database load from request paths. Use connection pooling with sensible limits, enforce pagination, and apply timeouts and circuit-breaking patterns at the Actix level to prevent resource exhaustion.
1. Bounded connection pool and timeouts
Configure the database pool to cap the number of open connections to CockroachDB. In Actix, the pool is typically created once at startup and shared with handlers via web::Data.
use actix_web::{web, App, HttpServer, Responder};
use sqlx::postgres::PgPoolOptions;
use std::time::Duration;

async fn health() -> impl Responder {
    "OK"
}

#[actix_web::main]
async fn main() -> std::io::Result<()> {
    // Bounded pool with timeouts to avoid resource exhaustion
    let pool = PgPoolOptions::new()
        .max_connections(20) // limit concurrent DB connections
        .acquire_timeout(Duration::from_secs(2))
        .idle_timeout(Duration::from_secs(30))
        .connect(&std::env::var("DATABASE_URL").expect("DATABASE_URL must be set"))
        .await
        .expect("Failed to create pool");

    HttpServer::new(move || {
        App::new()
            .app_data(web::Data::new(pool.clone()))
            .route("/health", web::get().to(health))
        // register API routes that use the pool here
    })
    .bind("127.0.0.1:8080")?
    .run()
    .await
}
2. Mandatory pagination for list endpoints
Avoid unbounded scans by requiring limit/offset or cursor pagination in queries. This reduces CockroachDB scan volume per request and prevents large in-memory transfers in Actix.
use actix_web::{web, HttpResponse, Responder};

#[derive(serde::Deserialize)]
struct ListItems {
    limit: Option<i64>,
    cursor: Option<i64>,
}

#[derive(serde::Serialize, sqlx::FromRow)]
struct Item {
    id: i64,
    name: String,
}

async fn list_items(
    pool: web::Data<sqlx::PgPool>,
    web::Query(params): web::Query<ListItems>,
) -> actix_web::Result<impl Responder> {
    let limit = params.limit.unwrap_or(50).clamp(1, 500); // enforce caps
    let cursor = params.cursor.unwrap_or(0);
    // CockroachDB-safe keyset pagination on the primary key
    let items: Vec<Item> = sqlx::query_as(
        "SELECT id, name FROM items WHERE id > $1 ORDER BY id ASC LIMIT $2",
    )
    .bind(cursor)
    .bind(limit)
    .fetch_all(pool.as_ref())
    .await
    .map_err(actix_web::error::ErrorInternalServerError)?;
    // Return a lightweight, typed response
    Ok(HttpResponse::Ok().json(items))
}
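The capping rule in the handler can be factored into a small pure helper so it is easy to test and reuse across endpoints (a sketch; clamp_limit is a hypothetical name, not part of the handler above):

```rust
/// Clamp a client-supplied page size to a safe range.
/// Defaults to 50 rows; never returns more than 500 or fewer than 1.
fn clamp_limit(requested: Option<i64>) -> i64 {
    requested.unwrap_or(50).clamp(1, 500)
}

fn main() {
    assert_eq!(clamp_limit(None), 50);          // default page size
    assert_eq!(clamp_limit(Some(10_000)), 500); // capped at 500
    assert_eq!(clamp_limit(Some(-5)), 1);       // floor at 1
    println!("limits clamped correctly");
}
```

Centralizing the cap means a new endpoint cannot accidentally accept an unbounded limit from the client.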
3. Query timeouts and cancellation
Set per-query timeouts so that a slow CockroachDB statement does not hold Actix workers indefinitely. Use tokio::time::timeout on the client side, or rely on CockroachDB's statement_timeout session setting.
use sqlx::PgPool;
use std::time::Duration;

async fn safe_query(pool: &PgPool) -> Result<(), sqlx::Error> {
    let query = sqlx::query("SELECT status FROM reports WHERE id = $1")
        .bind(12345);
    // Wrap the fetch in a client-side timeout so a slow statement
    // cannot pin an Actix worker indefinitely
    let result = tokio::time::timeout(
        Duration::from_secs(5),
        query.fetch_optional(pool),
    )
    .await;
    match result {
        Ok(Ok(Some(_row))) => { /* process row */ Ok(()) }
        Ok(Ok(None)) => Ok(()),
        Ok(Err(e)) => Err(e),
        // Surface the timeout as a distinct error instead of masking it
        Err(_elapsed) => Err(sqlx::Error::PoolTimedOut),
    }
}
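Alternatively, enforce the limit server-side. CockroachDB supports a statement_timeout session variable (the 5-second value below is illustrative; pick one aligned with your SLOs):

```sql
-- Abort any statement on this session that runs longer than 5 seconds
SET statement_timeout = '5s';
```

With a connection pool, the setting must be applied to every pooled session, for example in the pool's connection-setup hook, or it will only cover the single connection that ran the SET.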
4. Circuit breaker and retry backoff
Introduce lightweight failure isolation. If CockroachDB becomes slow or unavailable, short-circuit failing calls to avoid thread exhaustion in Actix. Use exponential backoff with jitter on retries and cap retry counts to avoid amplification.
use backoff::{future::retry, Error as BackoffError, ExponentialBackoff};
use sqlx::PgPool;
use std::time::Duration;

// Requires the backoff crate with its tokio feature enabled
async fn resilient_query(pool: &PgPool) -> Result<(), sqlx::Error> {
    // Exponential backoff with jitter, capped so retries cannot
    // amplify load indefinitely during CockroachDB backpressure
    let policy = ExponentialBackoff {
        max_elapsed_time: Some(Duration::from_secs(30)),
        ..ExponentialBackoff::default()
    };
    retry(policy, || async {
        sqlx::query("INSERT INTO events (data) VALUES ($1)")
            .bind("sample")
            .execute(pool)
            .await
            .map(|_| ())
            // Classify errors here: retry transient failures,
            // use BackoffError::permanent to fail fast on the rest
            .map_err(BackoffError::transient)
    })
    .await
}
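The circuit-breaking half of this step is not covered by retry backoff alone. A minimal, std-only state-machine sketch (CircuitBreaker, its threshold, and method names are illustrative, not a crate API); production breakers also add a half-open probe timer so the circuit can recover without manual intervention:

```rust
/// Opens after `threshold` consecutive failures and rejects calls while open,
/// so a struggling CockroachDB cluster stops receiving new load.
struct CircuitBreaker {
    consecutive_failures: u32,
    threshold: u32,
    open: bool,
}

impl CircuitBreaker {
    fn new(threshold: u32) -> Self {
        Self { consecutive_failures: 0, threshold, open: false }
    }

    /// Returns false when the breaker is open and the DB call should be skipped.
    fn allow(&self) -> bool {
        !self.open
    }

    fn record_success(&mut self) {
        self.consecutive_failures = 0;
        self.open = false;
    }

    fn record_failure(&mut self) {
        self.consecutive_failures += 1;
        if self.consecutive_failures >= self.threshold {
            self.open = true; // fail fast instead of queueing more work
        }
    }
}

fn main() {
    let mut cb = CircuitBreaker::new(3);
    cb.record_failure();
    cb.record_failure();
    assert!(cb.allow());  // still closed after 2 failures
    cb.record_failure();
    assert!(!cb.allow()); // opens on the 3rd consecutive failure
    cb.record_success();
    assert!(cb.allow());  // closes again on success
}
```

In an Actix handler, check allow() before touching the pool and return 503 immediately when the breaker is open; this keeps workers free while the database recovers.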
5. Schema and index discipline
Ensure CockroachDB tables have appropriate indexes for common filters and join keys used in Actix queries. Missing indexes lead to full table scans that increase CPU and I/O, raising the risk of DoS under load.
-- Example indexes to avoid full scans in Actix queries
CREATE INDEX IF NOT EXISTS idx_items_id ON items (id);
CREATE INDEX IF NOT EXISTS idx_events_created_at ON events (created_at);
Use EXPLAIN ANALYZE in CockroachDB to validate that Actix-generated queries use the intended indexes and do not spill to disk or cause excessive retries.
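For example, assuming the items table from the pagination example, a plan check might look like:

```sql
-- Confirm the keyset-paginated query scans an index rather than the full table
EXPLAIN ANALYZE
SELECT id, name FROM items WHERE id > 100 ORDER BY id ASC LIMIT 50;
```

The output should show a bounded index scan with roughly LIMIT-many rows read; a "full scan" note or a row count near the table size indicates the index is not being used.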
Related CWEs (uncontrolled resource consumption):
| CWE ID | Name | Severity |
|---|---|---|
| CWE-400 | Uncontrolled Resource Consumption | HIGH |
| CWE-770 | Allocation of Resources Without Limits or Throttling | MEDIUM |
| CWE-799 | Improper Control of Interaction Frequency | MEDIUM |
| CWE-835 | Loop with Unreachable Exit Condition ('Infinite Loop') | HIGH |
| CWE-1050 | Excessive Platform Resource Consumption within a Loop | MEDIUM |