
Rate Limiting Bypass in Express

How Rate Limiting Bypass Manifests in Express

Rate limiting bypass in Express applications typically exploits weaknesses in how the middleware tracks client requests. The most common bypass technique involves IP address spoofing through proxy headers. When Express applications run behind reverse proxies like Nginx or load balancers, they often rely on X-Forwarded-For headers to identify clients. Attackers can manipulate these headers to appear as different clients:

GET /api/users HTTP/1.1
Host: example.com
X-Forwarded-For: 1.2.3.4, 5.6.7.8, 9.10.11.12
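
A probe for this weakness can be sketched as a short script that fabricates a different spoofed client IP on every request. The helper name and target URL below are illustrative, not part of any real tool:

```javascript
// Hypothetical probe sketch: if each fabricated "client" IP receives a
// fresh quota, the limiter is keying on attacker-controlled header data
// rather than the real socket address.
function spoofedHeaders(requestNumber) {
  // Derive a distinct private-range IP from the request counter
  const fakeIp = `10.0.${Math.floor(requestNumber / 256) % 256}.${requestNumber % 256}`;
  return { 'X-Forwarded-For': fakeIp };
}

for (let i = 0; i < 3; i++) {
  console.log(spoofedHeaders(i));
  // In a real probe you would send each header set to the target, e.g.:
  // fetch('https://example.com/api/users', { headers: spoofedHeaders(i) })
}
```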

Another Express-specific bypass occurs when applications use req.ip with an overly permissive proxy trust configuration. Express does not trust proxy headers by default, but setting trust proxy to true tells Express to believe the entire X-Forwarded-For chain, so req.ip reflects whatever the attacker put in the header and they can cycle through arbitrary IP addresses:

// Vulnerable: trusts every hop in X-Forwarded-For,
// making req.ip fully attacker-controlled
app.set('trust proxy', true);
app.use(rateLimit({
  windowMs: 15 * 60 * 1000, // 15 minutes
  max: 100 // keyed on req.ip by default
}));

Token-based bypasses are particularly effective against Express JWT implementations. When the limiter on an authenticated endpoint still keys on IP rather than on the authenticated identity, an attacker holding a valid JWT can combine it with IP rotation to make effectively unlimited requests:

// Vulnerable pattern: limiter keys on req.ip, not on the user
app.use('/api/protected', authenticateJWT, rateLimit({
  windowMs: 15 * 60 * 1000,
  max: 100
}));

API key-based bypasses occur when applications rate limit only by IP while access is actually granted by API keys from headers or query parameters. Per-key usage is never counted, so an attacker who rotates through valid API keys (leaked in public repositories or guessable from weak generation patterns) spreads traffic across identities the limiter never sees:

// Vulnerable: default keyGenerator counts only req.ip;
// API key usage is never tracked
app.use('/api/data', rateLimit({
  windowMs: 15 * 60 * 1000,
  max: 100
}));

Endpoint-specific bypasses exploit inconsistent rate limiting across similar endpoints. An application might rate limit /api/users but forget to protect /api/v2/users or administrative endpoints:

// Inconsistent protection
app.use('/api/users', rateLimit({ max: 100 }));
// Missing rate limit on admin endpoints
app.use('/api/admin', adminRouter);

Express-Specific Detection

Detecting rate limiting bypasses in Express requires examining both code patterns and runtime behavior. Start by reviewing middleware configuration and proxy trust settings:

// Check for vulnerable proxy trust
const trustedProxies = process.env.TRUSTED_PROXIES;
app.set('trust proxy', trustedProxies || false); // Should be specific IPs

Audit your rate limiting middleware for missing configurations:

// Vulnerable: missing store configuration
app.use(rateLimit({
  windowMs: 15 * 60 * 1000,
  max: 100 // Uses default MemoryStore (not suitable for production)
}));

middleBrick's Express-specific scanning identifies these bypass patterns by testing proxy header manipulation, token rotation attacks, and endpoint consistency. The scanner sends requests with varying X-Forwarded-For headers to detect if multiple IPs bypass limits:

GET /api/protected HTTP/1.1
X-Forwarded-For: 1.1.1.1, 2.2.2.2, 3.3.3.3
Authorization: Bearer valid.jwt.token

The scanner also tests for inconsistent rate limiting by mapping your API surface and identifying endpoints that should have protection but don't:

// What middleBrick detects:
// - /api/users has rate limiting
// - /api/v2/users (same functionality) has no protection
// - /api/admin (sensitive) has no rate limiting

Runtime monitoring with Express middleware can log suspicious patterns:

app.use((req, res, next) => {
  const ip = req.ip;
  const userAgent = req.get('User-Agent');
  const requestCount = getRecentRequestCount(ip, userAgent);
  
  if (requestCount > threshold) {
    console.warn(`Suspicious pattern: ${ip} - ${userAgent} - ${requestCount} requests`);
  }
  next();
});
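
The snippet above assumes a getRecentRequestCount helper and a threshold constant that are not defined anywhere; a minimal in-memory sketch of that helper might look like the following. A production version would keep these counts in Redis so they survive restarts and are shared across instances:

```javascript
// Sliding-window request counter (hypothetical helper from the
// monitoring middleware above; the names are illustrative).
const WINDOW_MS = 60 * 1000;        // count requests over the last minute
const requestLog = new Map();       // "ip:userAgent" -> array of timestamps

function getRecentRequestCount(ip, userAgent) {
  const key = `${ip}:${userAgent}`;
  const now = Date.now();
  // Drop timestamps outside the window, then record this request
  const recent = (requestLog.get(key) || []).filter(t => now - t < WINDOW_MS);
  recent.push(now);
  requestLog.set(key, recent);
  return recent.length;
}
```

The threshold used in the middleware would then be a tunable constant chosen from your traffic baseline.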

Express-Specific Remediation

Express provides several native approaches to fix rate limiting bypasses. First, configure proxy trust correctly:

// Secure: trust only specific proxies
const trustedProxies = ['192.168.1.1', '10.0.0.1'];
app.set('trust proxy', trustedProxies);

// Or for cloud providers
app.set('trust proxy', 1); // Trust first proxy (Heroku, AWS ELB)

Use Redis or database-backed stores instead of MemoryStore for production:

const rateLimit = require('express-rate-limit');
const { RedisStore } = require('rate-limit-redis'); // v4+ export style
const Redis = require('ioredis');
const client = new Redis(process.env.REDIS_URL);

app.use(rateLimit({
  // rate-limit-redis v3+ takes a sendCommand function
  store: new RedisStore({
    sendCommand: (...args) => client.call(...args)
  }),
  windowMs: 15 * 60 * 1000,
  max: 100,
  keyGenerator: (req) => {
    // Combine IP with user agent for better tracking
    return req.ip + ':' + req.get('User-Agent');
  }
}));

Implement consistent rate limiting across all endpoints using Express routers:

// Centralized rate limiting
const apiLimiter = rateLimit({
  windowMs: 15 * 60 * 1000,
  max: 100,
  message: 'Too many requests from this IP'
});

// Apply to all API routes
app.use('/api/', apiLimiter);

// Additional limits for sensitive endpoints
const adminLimiter = rateLimit({
  windowMs: 10 * 60 * 1000,
  max: 10,
  skip: (req) => !req.user || !req.user.isAdmin
});

app.use('/api/admin', adminLimiter, adminRouter);

Rate limit by stable identity rather than by raw credentials to prevent token-rotation bypasses. Concatenating the bearer token into the key would hand every freshly minted token a fresh quota; instead, key on the authenticated account when one is present and fall back to the IP for anonymous traffic:

const identityLimiter = rateLimit({
  store: new RedisStore({
    sendCommand: (...args) => client.call(...args)
  }),
  keyGenerator: (req) => {
    // Prefer the stable account identity: rotating tokens for the
    // same user still lands in the same bucket
    if (req.user) return `user:${req.user.id}`;
    if (req.headers['x-api-key']) return `key:${req.headers['x-api-key']}`;
    return `ip:${req.ip}`;
  },
  windowMs: 15 * 60 * 1000,
  max: 100
});

app.use('/api/', identityLimiter);

For JWT-based APIs, combine rate limiting with token validation:

// Create the limiter once; instantiating it inside the handler would
// give every request a fresh store and the limit would never trigger
const freshTokenLimiter = rateLimit({ windowMs: 60000, max: 10 });

app.use('/api/protected', authenticateJWT, (req, res, next) => {
  // Throttle brand-new tokens harder (JWT iat is in seconds)
  const tokenAgeMs = Date.now() - req.user.iat * 1000;
  if (tokenAgeMs < 60000) { // less than 1 minute old
    return freshTokenLimiter(req, res, next);
  }
  next();
});

Related CWEs

CWE ID    Name                                                      Severity
CWE-400   Uncontrolled Resource Consumption                         HIGH
CWE-770   Allocation of Resources Without Limits or Throttling      MEDIUM
CWE-799   Improper Control of Interaction Frequency                 MEDIUM
CWE-835   Loop with Unreachable Exit Condition ('Infinite Loop')    HIGH
CWE-1050  Excessive Platform Resource Consumption within a Loop     MEDIUM

Frequently Asked Questions

How does middleBrick detect rate limiting bypasses in Express applications?
middleBrick tests Express applications by sending requests with manipulated proxy headers like X-Forwarded-For to check if multiple IP addresses bypass rate limits. It also maps your API endpoints to identify inconsistent protection patterns, such as rate limiting on /api/users but not on /api/v2/users or administrative endpoints. The scanner tests token rotation attacks by cycling through valid JWT tokens to see if authenticated endpoints have proper rate limiting.
What's the difference between MemoryStore and RedisStore for Express rate limiting?
MemoryStore (the default) stores rate limiting data in the Node.js process memory, making it unsuitable for production because it doesn't scale across multiple server instances and data is lost when the process restarts. RedisStore uses a centralized Redis database, allowing all server instances to share rate limiting state, providing persistence across restarts, and enabling distributed rate limiting across a server cluster. For production Express applications, always use RedisStore or another persistent store.
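
The scaling problem can be illustrated with simple arithmetic. This toy function (not express-rate-limit internals) shows why per-process counters weaken the limit in proportion to the number of instances behind the load balancer:

```javascript
// With N instances each keeping its own MemoryStore counter, a client
// whose requests are spread across instances can send up to N * max
// requests per window before any single counter trips.
function effectiveLimit(maxPerWindow, instanceCount, sharedStore) {
  return sharedStore ? maxPerWindow : maxPerWindow * instanceCount;
}

console.log(effectiveLimit(100, 4, false)); // MemoryStore: 400 requests slip through
console.log(effectiveLimit(100, 4, true));  // Shared Redis store: 100 total
```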