Rate Limiting Bypass in Actix
How Rate Limiting Bypass Manifests in Actix
Rate limiting bypass in Actix applications typically occurs when the rate limiting implementation fails to account for certain request characteristics or when attackers exploit gaps in the middleware chain. Actix's async-first architecture creates specific attack vectors that differ from traditional synchronous frameworks.
One common bypass pattern involves exploiting the timing between Actix's middleware execution and the actual handler. Consider this vulnerable pattern:
async fn my_handler(req: HttpRequest) -> impl Responder {
    let ip = req.peer_addr().unwrap().ip();
    // Rate limiting check happens AFTER the expensive operation
    let result = expensive_db_operation().await;
    // Rate limit check only at the end
    if is_rate_limited(ip) {
        return HttpResponse::TooManyRequests().finish();
    }
    HttpResponse::Ok().body(result)
}
The bypass here is straightforward: attackers can trigger the expensive operation repeatedly before the rate limit check occurs, potentially causing resource exhaustion even when rate limits are technically enforced.
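The straightforward fix is to invert the order: run the cheap limit check before any expensive work. A minimal sketch of the ordering, with plain functions standing in for the handler and the limit decision passed in as a boolean:

```rust
/// Sketch: the cheap check runs first, so a limited client never
/// triggers the expensive operation. Returns an HTTP-style status code.
fn handle(ip_limited: bool, expensive_calls: &mut u32) -> u16 {
    if ip_limited {
        return 429; // rejected before any real work happens
    }
    *expensive_calls += 1; // expensive_db_operation() would run here
    200
}
```

The same shape applies inside an Actix handler: call `is_rate_limited(ip)` before the first `.await` on anything costly.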
Another Actix-specific bypass involves the framework's handling of HTTP/2 multiplexing. Actix supports multiple requests over a single connection, and naive rate limiting that only tracks connections misses this:
// Vulnerable: only tracks connections, not requests
async fn connection_limited_handler(req: HttpRequest) -> impl Responder {
    let conn = req.connection_info();
    // This only limits by connection, not by client
    if is_connection_rate_limited(&conn) {
        return HttpResponse::TooManyRequests().finish();
    }
    // Multiple HTTP/2 requests can bypass this check
    process_request().await
}
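A limiter that counts requests per client key closes this gap: a hundred streams multiplexed over one HTTP/2 connection still consume a hundred slots. A minimal single-threaded sketch (`PerClientLimiter` is a hypothetical name, and window expiry is omitted for brevity):

```rust
use std::collections::HashMap;

/// Hypothetical sketch: count every request per client key, so the limit
/// holds even when one HTTP/2 connection multiplexes many requests.
struct PerClientLimiter {
    counts: HashMap<String, usize>,
    max_requests: usize,
}

impl PerClientLimiter {
    fn new(max_requests: usize) -> Self {
        Self { counts: HashMap::new(), max_requests }
    }

    /// Returns true if the request is allowed. Keyed by client, not by
    /// connection: multiplexed streams all land in the same bucket.
    fn allow(&mut self, client_key: &str) -> bool {
        let count = self.counts.entry(client_key.to_owned()).or_insert(0);
        if *count >= self.max_requests {
            return false;
        }
        *count += 1;
        true
    }
}
```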
Header manipulation also creates bypass opportunities. Actix's extractors can be used to bypass simplistic rate limiting:
// Vulnerable to header spoofing
async fn vulnerable_rate_limit(req: HttpRequest) -> impl Responder {
    // Attacker can set any X-Client-ID they want
    let client_id = req
        .headers()
        .get("X-Client-ID")
        .and_then(|v| v.to_str().ok())
        .unwrap_or("");
    if is_rate_limited(client_id) {
        return HttpResponse::TooManyRequests().finish();
    }
    process_request().await
}
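The fix is to key the limiter on identifiers the client cannot choose: the transport-level peer address, or an identity verified server-side. A hypothetical sketch of such a key derivation:

```rust
use std::net::IpAddr;

/// Hypothetical sketch: derive the rate-limit key from the peer address,
/// which the client cannot forge, instead of an arbitrary request header.
/// A server-side verified identity (e.g. from a validated session) may be
/// used when available.
fn rate_limit_key(peer_ip: IpAddr, authenticated_user: Option<&str>) -> String {
    match authenticated_user {
        // A verified identity is safe: the attacker cannot pick it freely.
        Some(user) => format!("user:{}", user),
        // Otherwise fall back to the peer IP, never a client-chosen header.
        None => format!("ip:{}", peer_ip),
    }
}
```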
The framework's async nature also enables timing-based bypasses. If rate limiting uses in-memory counters without proper synchronization across async tasks, race conditions can occur:
// Race condition vulnerable
async fn race_condition_vulnerable(req: HttpRequest) -> impl Responder {
    let ip = req.peer_addr().unwrap().ip();
    let current_count = get_request_count(ip); // Not atomic
    if current_count >= MAX_REQUESTS {
        return HttpResponse::TooManyRequests().finish();
    }
    increment_request_count(ip); // Another task could increment first
    process_request().await
}
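An atomic fetch-and-add collapses the check and the increment into a single step, removing the race. A minimal sketch (window reset omitted; `AtomicWindowCounter` is a hypothetical name):

```rust
use std::sync::atomic::{AtomicUsize, Ordering};

/// Hypothetical sketch: fetch_add makes the check and the increment one
/// atomic operation, so two concurrent tasks can never both observe
/// "under limit" and both proceed.
struct AtomicWindowCounter {
    count: AtomicUsize,
    max_requests: usize,
}

impl AtomicWindowCounter {
    fn new(max_requests: usize) -> Self {
        Self { count: AtomicUsize::new(0), max_requests }
    }

    /// Returns true if this request is admitted.
    fn try_admit(&self) -> bool {
        // fetch_add returns the previous value; the increment has already
        // happened atomically, so no interleaving can exceed the limit.
        self.count.fetch_add(1, Ordering::SeqCst) < self.max_requests
    }
}
```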
Actix-Specific Detection
Detecting rate limiting bypasses in Actix requires understanding both the framework's execution model and common bypass patterns. middleBrick's scanner specifically targets Actix applications by examining the request flow and middleware chain.
The scanner analyzes Actix's middleware stack to identify where rate limiting is applied. In Actix, middleware executes in a specific order, and bypass vulnerabilities often occur when rate limiting middleware isn't positioned correctly:
// middleBrick detects this pattern as HIGH risk
App::new()
    .wrap(MyRateLimitMiddleware::new())
    .wrap(MyAuthMiddleware::new())
// In actix-web, middleware registered later executes first, so the rate
// limiter above only runs after authentication work has been done.
// Register it last so it executes before everything else.
The scanner also tests for timing-based bypasses by sending rapid bursts of requests and analyzing response patterns. Actix's async execution means that requests from the same client can be processed concurrently, potentially overwhelming rate limiters that don't account for this:
middleBrick specifically tests:
- Concurrent request handling across multiple tasks
- HTTP/2 multiplexing support and its impact on rate limiting
- Header manipulation attempts for client identification
- Timing attacks that exploit async execution delays
- Resource exhaustion before rate limit checks
For Actix applications using extractors, middleBrick tests whether the rate limiting occurs before or after data extraction, as this ordering can create bypasses:
// middleBrick flags this as vulnerable
async fn extractor_vulnerable(
    json_body: Json<serde_json::Value>,
    req: HttpRequest,
) -> impl Responder {
    // Rate limiting here is too late - the body has already been parsed
    if is_rate_limited(req.peer_addr().unwrap().ip()) {
        return HttpResponse::TooManyRequests().finish();
    }
    process_request(json_body).await
}
The scanner also examines Actix's error handling patterns. If a handler discards the Result of a rate-limit check instead of propagating it, the failure is silent and creates a bypass:
// middleBrick detects improper error handling
async fn error_bypass(
    req: HttpRequest,
) -> Result<impl Responder, Error> {
    // The Result is discarded, so a failing rate-limit check is
    // silently ignored and execution continues
    let _ = check_rate_limit(req.peer_addr().unwrap().ip());
    // Bypass possible because the check fails silently
    Ok(process_request().await)
}
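The remediation is to propagate the error (e.g. with `?`) and to fail closed when the limiter itself breaks, so an outage in the rate-limit backend cannot become a bypass. A minimal sketch of the fail-closed decision, with `LimitError` as a hypothetical backend error type:

```rust
/// Hypothetical error type for a rate-limit backend (e.g. store unavailable).
#[derive(Debug)]
enum LimitError {
    Backend,
}

/// Hypothetical sketch of a fail-closed policy: a backend error is treated
/// the same as "limited", so a broken limiter rejects rather than waves
/// requests through.
fn admit(check: Result<bool, LimitError>) -> bool {
    match check {
        Ok(allowed) => allowed,
        // Fail closed: never continue when the check itself failed.
        Err(LimitError::Backend) => false,
    }
}
```

Whether failing closed is acceptable is a product decision; the vulnerable alternative is failing open silently, as in the example above.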
Actix-Specific Remediation
Fixing rate limiting bypasses in Actix requires leveraging the framework's native capabilities and following async-safe patterns. The key is to implement rate limiting as early as possible in the request lifecycle and use Actix's middleware system correctly.
Here's a robust Actix rate limiting middleware that prevents common bypasses:
use std::collections::HashMap;
use std::future::{ready, Ready};
use std::rc::Rc;
use std::sync::Arc;
use std::time::{Duration, Instant};

use actix_web::dev::{forward_ready, Service, ServiceRequest, ServiceResponse, Transform};
use actix_web::http::header::USER_AGENT;
use actix_web::{App, Error};
use futures_util::future::LocalBoxFuture;
use tokio::sync::Mutex;

struct RequestCounter {
    count: usize,
    reset_time: Instant,
}

#[derive(Clone)]
pub struct RateLimitMiddleware {
    store: Arc<Mutex<HashMap<String, RequestCounter>>>,
    max_requests: usize,
    window: Duration,
}

impl RateLimitMiddleware {
    pub fn new(max_requests: usize, window: Duration) -> Self {
        Self {
            store: Arc::new(Mutex::new(HashMap::new())),
            max_requests,
            window,
        }
    }

    async fn check_rate_limit(&self, client_key: String) -> Result<(), Error> {
        // One lock guards the read, the reset and the increment, so no
        // other task can interleave between the check and the increment
        let mut store = self.store.lock().await;
        let now = Instant::now();
        let window = self.window;
        let counter = store.entry(client_key).or_insert_with(|| RequestCounter {
            count: 0,
            reset_time: now + window,
        });

        // Reset the counter if the window has expired
        if now >= counter.reset_time {
            counter.count = 0;
            counter.reset_time = now + window;
        }

        if counter.count >= self.max_requests {
            return Err(actix_web::error::ErrorTooManyRequests("Rate limit exceeded"));
        }
        counter.count += 1;
        Ok(())
    }

    fn generate_client_key(req: &ServiceRequest) -> String {
        // Combine multiple identifiers so a single spoofed header cannot
        // move a client into a fresh bucket
        let conn_info = req.connection_info();
        let peer_ip = conn_info.realip_remote_addr().unwrap_or("unknown");
        let user_agent = req
            .headers()
            .get(USER_AGENT)
            .and_then(|h| h.to_str().ok())
            .unwrap_or("unknown");
        format!("{}:{}:{}", peer_ip, user_agent, req.path())
    }
}

// Transform builds the wrapping service when the middleware is registered
impl<S, B> Transform<S, ServiceRequest> for RateLimitMiddleware
where
    S: Service<ServiceRequest, Response = ServiceResponse<B>, Error = Error> + 'static,
    S::Future: 'static,
    B: 'static,
{
    type Response = ServiceResponse<B>;
    type Error = Error;
    type Transform = RateLimitService<S>;
    type InitError = ();
    type Future = Ready<Result<Self::Transform, Self::InitError>>;

    fn new_transform(&self, service: S) -> Self::Future {
        ready(Ok(RateLimitService {
            service: Rc::new(service),
            limiter: self.clone(),
        }))
    }
}

pub struct RateLimitService<S> {
    service: Rc<S>,
    limiter: RateLimitMiddleware,
}

impl<S, B> Service<ServiceRequest> for RateLimitService<S>
where
    S: Service<ServiceRequest, Response = ServiceResponse<B>, Error = Error> + 'static,
    S::Future: 'static,
    B: 'static,
{
    type Response = ServiceResponse<B>;
    type Error = Error;
    type Future = LocalBoxFuture<'static, Result<Self::Response, Self::Error>>;

    forward_ready!(service);

    fn call(&self, req: ServiceRequest) -> Self::Future {
        let service = Rc::clone(&self.service);
        let limiter = self.limiter.clone();
        let client_key = RateLimitMiddleware::generate_client_key(&req);

        Box::pin(async move {
            // Check the rate limit BEFORE the handler or any extractor runs
            limiter.check_rate_limit(client_key).await?;
            service.call(req).await
        })
    }
}

// Usage in an Actix app
App::new().wrap(RateLimitMiddleware::new(100, Duration::from_secs(60)))
For Actix deployments that run multiple instances, in-memory counters are not enough: each instance only sees its own traffic. You can integrate with Redis for distributed rate limiting; the sketch below uses the redis crate's async ConnectionManager:
use std::time::Duration;

use actix_web::Error;
use redis::aio::ConnectionManager;

pub struct DistributedRateLimitMiddleware {
    redis: ConnectionManager,
    max_requests: usize,
    window: Duration,
}

impl DistributedRateLimitMiddleware {
    pub fn new(redis: ConnectionManager, max_requests: usize, window: Duration) -> Self {
        Self { redis, max_requests, window }
    }

    async fn check_rate_limit(&self, client_key: &str) -> Result<(), Error> {
        let key = format!("rate_limit:{}", client_key);
        let window_seconds = self.window.as_secs();
        // ConnectionManager is a cheap clone over a shared multiplexed connection
        let mut conn = self.redis.clone();

        // INCR is atomic on the Redis server, so concurrent app instances
        // cannot race between the check and the increment
        let current: u64 = redis::cmd("INCR")
            .arg(&key)
            .query_async(&mut conn)
            .await
            .map_err(actix_web::error::ErrorInternalServerError)?;

        if current == 1 {
            // First request in this window: set the expiration. (A Lua script
            // or SET ... NX EX would make INCR+EXPIRE a single atomic step.)
            let _: i64 = redis::cmd("EXPIRE")
                .arg(&key)
                .arg(window_seconds)
                .query_async(&mut conn)
                .await
                .map_err(actix_web::error::ErrorInternalServerError)?;
        }

        if current > self.max_requests as u64 {
            return Err(actix_web::error::ErrorTooManyRequests("Rate limit exceeded"));
        }
        Ok(())
    }
}
The key remediation principles for Actix are:
- Apply rate limiting as early as possible in the middleware chain
- Use atomic operations or proper async synchronization
- Account for HTTP/2 multiplexing by tracking requests, not just connections
- Combine multiple identifiers (IP, User-Agent, path) to prevent header spoofing
- Use distributed storage for rate limiting in clustered deployments
- Test with concurrent requests to verify race conditions are handled
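The last principle can be exercised with an ordinary threaded test. A minimal sketch, using a mutex-guarded counter as a stand-in for the application's limiter, asserting that concurrent requests can never exceed the limit:

```rust
use std::sync::{Arc, Mutex};
use std::thread;

/// Hypothetical stand-in for a limiter: one lock makes check-and-increment
/// a single critical section, so no interleaving can exceed the limit.
fn try_admit(counter: &Mutex<usize>, max: usize) -> bool {
    let mut count = counter.lock().unwrap();
    if *count >= max {
        return false;
    }
    *count += 1;
    true
}

/// Fire `total` concurrent "requests" and return how many were admitted.
fn concurrent_admitted(total: usize, max: usize) -> usize {
    let counter = Arc::new(Mutex::new(0usize));
    let handles: Vec<_> = (0..total)
        .map(|_| {
            let c = Arc::clone(&counter);
            thread::spawn(move || try_admit(&c, max))
        })
        .collect();
    handles
        .into_iter()
        .map(|h| h.join().unwrap())
        .filter(|&admitted| admitted)
        .count()
}
```

Regardless of thread scheduling, exactly `max` of the `total` attempts are admitted; a race-prone limiter would occasionally admit more.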
Related CWEs:
| CWE ID | Name | Severity |
|---|---|---|
| CWE-400 | Uncontrolled Resource Consumption | HIGH |
| CWE-770 | Allocation of Resources Without Limits or Throttling | MEDIUM |
| CWE-799 | Improper Control of Interaction Frequency | MEDIUM |
| CWE-835 | Loop with Unreachable Exit Condition ('Infinite Loop') | HIGH |
| CWE-1050 | Excessive Platform Resource Consumption within a Loop | MEDIUM |