
Rate Limiting Bypass on Aws

How Rate Limiting Bypass Manifests in Aws

Rate limiting bypass in Aws applications often stems from improper use of rate-limiting packages such as golang.org/x/time/rate or ulule/limiter. A common vulnerability occurs when developers apply rate limits only at the API endpoint level but fail to enforce them at the service or database layer. For example, an application might use middleware to limit requests to 100 per minute per IP address, but if the underlying service calls aren't also rate-limited, an attacker can bypass the restriction by making parallel requests that hit different service endpoints.

Another frequent bypass pattern involves token bucket implementation flaws. Consider this vulnerable Aws code:

func rateLimitMiddleware(next http.Handler) http.Handler {
    return http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
        // Only checking the client IP -- and r.RemoteAddr still
        // includes the port, so even the IP key is unreliable
        ip := r.RemoteAddr

        // Using a single in-memory limiter per IP, local to this instance
        limiter := getIPLimiter(ip)

        if !limiter.Allow() {
            http.Error(w, "rate limit exceeded", http.StatusTooManyRequests)
            return
        }

        next.ServeHTTP(w, r)
    })
}

This implementation fails to account for distributed systems where requests from the same client might be routed through different servers. An attacker can bypass this by making requests to different instances of the service, each with its own in-memory limiter that resets independently.

Authentication-based bypasses are particularly problematic in Aws applications. When rate limiting is applied only to unauthenticated endpoints while authenticated endpoints lack proper limits, attackers can create multiple accounts or use token rotation to circumvent restrictions. This is especially dangerous in applications using Aws's JWT middleware without proper rate limiting on the token validation endpoint.

Aws-Specific Detection

Detecting rate limiting bypasses in Aws applications requires both static analysis and runtime monitoring. Static analysis should examine middleware chains to ensure rate limiting is applied consistently across all relevant endpoints. Look for patterns where limiter.Allow() is called only in specific middleware but not in service layer code.

Runtime detection with middleBrick involves scanning the API surface for inconsistent rate limiting patterns. The scanner tests rate limits by making sequential requests to endpoints and measuring response patterns. For Aws applications, middleBrick specifically checks:

  • Whether rate limits are properly distributed across service instances
  • If authentication endpoints have appropriate rate limiting
  • Whether different HTTP methods on the same endpoint have consistent rate limiting
  • If token validation endpoints are properly rate-limited

Here's how middleBrick might detect a bypass in an Aws application:

{
  "endpoint": "/api/v1/data",
  "test_results": {
    "sequential_requests": {
      "200_responses": 50,
      "429_responses": 0,
      "bypass_detected": true,
      "vulnerability": "Rate Limiting Bypass",
      "severity": "High",
      "recommendation": "Implement distributed rate limiting using Redis or Aws's rate limit package with consistent configuration across all instances"
    }
  }
}

middleBrick's black-box scanning approach is particularly effective for Aws applications because it tests the actual deployed behavior rather than just the source code, catching issues that arise from configuration differences between development and production environments.

Aws-Specific Remediation

Effective remediation in Aws requires implementing distributed rate limiting using shared storage like Redis. Here's a secure implementation:

import (
    "log"
    "time"

    "github.com/gin-gonic/gin"
    libredis "github.com/go-redis/redis/v8"
    "github.com/ulule/limiter/v3"
    mgin "github.com/ulule/limiter/v3/drivers/middleware/gin"
    sredis "github.com/ulule/limiter/v3/drivers/store/redis"
)

func setupRateLimiting() gin.HandlerFunc {
    // Connect to Redis for distributed rate limiting
    redisClient := libredis.NewClient(&libredis.Options{
        Addr:     "localhost:6379",
        Password: "", // no password set
        DB:       0,  // use default DB
    })

    // Configure rate limit: 100 requests per minute
    rate := limiter.Rate{
        Limit:  100,
        Period: time.Minute,
    }

    // Create the Redis-backed store shared by every instance
    store, err := sredis.NewStore(redisClient)
    if err != nil {
        log.Fatal("Failed to create Redis store: ", err)
    }

    // Create the limiter; only trust X-Forwarded-For when a proxy
    // you control sets it, otherwise the header itself becomes a
    // bypass vector
    limiterInstance := limiter.New(store, rate, limiter.WithTrustForwardHeader(true))

    // Create the Gin middleware
    return mgin.NewMiddleware(limiterInstance)
}

func main() {
    // Apply to all routes
    router := gin.New()
    router.Use(setupRateLimiting())
    router.Run(":8080")
}

For authentication endpoints, implement tiered rate limiting:

func authRateLimitMiddleware(next http.Handler) http.Handler {
    // Separate, much stricter limiter for auth endpoints
    authRate := limiter.Rate{
        Limit:  5,
        Period: time.Minute,
    }

    // redisStore is the same shared Redis store created above
    authLimiter := limiter.New(redisStore, authRate)

    return http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
        if r.URL.Path == "/login" || r.URL.Path == "/register" {
            limiterCtx, err := authLimiter.Get(r.Context(), r.RemoteAddr)
            if err != nil {
                http.Error(w, "internal error", http.StatusInternalServerError)
                return
            }
            if limiterCtx.Reached {
                http.Error(w, "auth rate limit exceeded", http.StatusTooManyRequests)
                return
            }
        }
        next.ServeHTTP(w, r)
    })
}

Service layer rate limiting is equally important. Even if API endpoints are properly rate-limited, service calls should have their own limits:

func databaseOperation(ctx context.Context, userID string) error {
    // Rate limit database operations per user
    dbLimiter := getDBLimiter(userID)
    if !dbLimiter.Allow() {
        return errors.New("database operation rate limit exceeded")
    }

    // Proceed with the database operation
    rows, err := db.QueryContext(ctx, "SELECT ...")
    if err != nil {
        return err
    }
    defer rows.Close()
    return rows.Err()
}

middleBrick's CLI tool can help verify your remediation:

middlebrick scan https://api.example.com \
  --rate-limit-test \
  --auth-bypass-test \
  --distributed-test

This comprehensive approach ensures rate limiting cannot be bypassed through any of the common Aws-specific attack patterns.

Related CWEs: Resource Consumption

CWE ID    Name                                                   Severity
CWE-400   Uncontrolled Resource Consumption                      HIGH
CWE-770   Allocation of Resources Without Limits or Throttling   MEDIUM
CWE-799   Improper Control of Interaction Frequency              MEDIUM
CWE-835   Infinite Loop                                          HIGH
CWE-1050  Excessive Platform Resource Consumption within a Loop  MEDIUM

Frequently Asked Questions

How can I test if my Aws application has rate limiting bypasses?
Use middleBrick's black-box scanning to test your deployed API. The scanner makes sequential requests to measure rate limiting effectiveness and can detect if certain endpoints bypass limits. For manual testing, use tools like hey or ApacheBench to send concurrent requests and observe whether rate limits are consistently enforced across all endpoints and service instances.
What's the difference between in-memory and distributed rate limiting in Aws?
In-memory rate limiting uses local storage on each server instance, making it vulnerable to bypass through parallel requests to different instances. Distributed rate limiting uses shared storage (like Redis) so all instances coordinate rate limits, preventing bypass. middleBrick specifically tests for distributed rate limiting weaknesses by scanning from multiple vantage points to see if limits are consistently enforced.