Rate Limiting Bypass in FeathersJS with DynamoDB
Rate Limiting Bypass in FeathersJS with DynamoDB — how this specific combination creates or exposes the vulnerability
FeathersJS is a popular framework for building REST and real-time APIs with minimal boilerplate. When paired with Amazon DynamoDB as the persistence layer, developers often rely on built-in service hooks and client-driven query patterns. If rate limiting is implemented only at the client level or through lightweight in-memory counters, the architecture can be bypassed in multi-instance or serverless deployments. DynamoDB’s high request throughput and low latency enable rapid request bursts, and without coordinated enforcement, a single compromised endpoint can be hammered without triggering protections.
The vulnerability arises when rate limiting is applied naively, for example by counting requests per user ID in application memory or by using uncoordinated per-instance limits. Because DynamoDB does not inherently enforce request-rate caps on a per-client basis, an attacker can open many concurrent connections or use distributed sources to evade threshold-based detection. In serverless environments, cold starts and instance sprawl mean that in-memory limits are not shared across executions, enabling an attacker to cycle through instances to circumvent throttling. Additionally, if the service exposes an unauthenticated endpoint (such as a public “create” action) and lacks validation on query parameters that map to DynamoDB queries, an attacker can amplify traffic by exploiting expensive scan or query operations that consume provisioned capacity while bypassing intended request ceilings.
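The per-instance failure mode is easy to demonstrate in isolation: the same fixed-window counter that blocks an attacker on one process lets the full burst through once requests are spread across independent instances. A minimal sketch (the instance count and limit are illustrative):

```javascript
// Fixed-window counter with one Map per "instance" -- state is not shared
function makeInstanceLimiter(limit) {
  const counts = new Map();
  return key => {
    const n = (counts.get(key) || 0) + 1;
    counts.set(key, n);
    return n <= limit; // true = request allowed
  };
}

const LIMIT = 100;

// One instance: requests 101..300 from the same client are rejected
const single = makeInstanceLimiter(LIMIT);
let allowedSingle = 0;
for (let i = 0; i < 300; i++) if (single('attacker')) allowedSingle++;
console.log(allowedSingle); // 100 -- the limit holds

// Three instances behind a load balancer: the same 300 requests are spread
// round-robin, and every per-instance counter stays comfortably under 100
const fleet = [makeInstanceLimiter(LIMIT), makeInstanceLimiter(LIMIT), makeInstanceLimiter(LIMIT)];
let allowedFleet = 0;
for (let i = 0; i < 300; i++) if (fleet[i % 3]('attacker')) allowedFleet++;
console.log(allowedFleet); // 300 -- the "limit" never trips
```

In a serverless deployment the effect is even stronger, since each cold-started execution environment begins with an empty counter map.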
Another vector specific to this stack involves pagination and filtering parameters that map directly to DynamoDB scan or query patterns. If a FeathersJS service allows unbounded limit values or does not enforce strict validation on filter expressions, an attacker can craft requests that force large read operations, effectively saturating throughput without tripping coarse-grained rate counters. Because DynamoDB consumes read capacity units heavily on scans, this can degrade performance for legitimate users while evading application-level limits that only inspect request counts rather than backend load. Coordinated detection across distributed instances and alignment with DynamoDB’s consumed capacity metrics is often absent in default FeathersJS setups, enabling stealthy bypass that does not trigger obvious failures but still abuses backend resources.
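One way to close the pagination and filter vector is to normalize and bound query input before any DynamoDB command is built, rejecting fields that would force a table scan. A sketch of such a sanitizer (the allow-list, default, and cap are illustrative choices, not Feathers built-ins):

```javascript
const MAX_LIMIT = 50;
const ALLOWED_FILTERS = new Set(['id', 'status']); // fields backed by keys or indexes

// Returns a bounded, allow-listed copy of the query, or throws on abusive input
function sanitizeQuery(query = {}) {
  const limit = Number(query.$limit ?? 10);
  if (!Number.isFinite(limit) || limit < 1) {
    throw new Error('Invalid $limit');
  }
  const clean = { $limit: Math.min(limit, MAX_LIMIT) };
  for (const [field, value] of Object.entries(query)) {
    if (field.startsWith('$')) continue; // operators handled above
    if (!ALLOWED_FILTERS.has(field)) {
      throw new Error(`Filtering on "${field}" would force a table scan`);
    }
    clean[field] = value;
  }
  return clean;
}

console.log(sanitizeQuery({ $limit: '100000', id: 'abc' })); // { $limit: 50, id: 'abc' }
```

Run in a Feathers `before` hook, this ensures an attacker-supplied `$limit` of 100000 reaches DynamoDB as 50, and a filter on an unindexed field is rejected outright rather than silently consuming read capacity.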
DynamoDB-Specific Remediation in FeathersJS — concrete code fixes
To harden a FeathersJS service backed by DynamoDB, implement rate limiting that is coordinated across instances and tightly coupled with backend cost signals. Use a token-bucket or sliding-window algorithm stored in a shared, low-latency data store, and validate limits before constructing DynamoDB requests. Enforce strict parameter validation on query and pagination inputs, and cap consumed read capacity by bounding limit values and rejecting scans where feasible. Instrument service hooks to inspect consumed capacity and integrate alerts when thresholds approach provisioned levels.
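The coordination requirement can be sketched as a fixed-window limiter that talks to an injected shared store. The store contract below (`incr` atomically increments and returns the new count) is illustrative: in production it would be backed by Redis `INCR` plus `EXPIRE`, or by a DynamoDB atomic counter with a TTL attribute, and the call would be awaited; the in-memory stub here keeps the sketch self-contained for a single-process demo:

```javascript
// Store contract: incr(key) atomically increments and returns the new count.
// Every app instance talks to the SAME store, so the window is global.
function makeSharedLimiter(store, limit, windowMs) {
  return identity => {
    const windowId = Math.floor(Date.now() / windowMs);
    return store.incr(`rl:${identity}:${windowId}`) <= limit;
  };
}

// In-memory stub standing in for Redis/DynamoDB (single-process demo only;
// a real shared store is asynchronous and the check above would be awaited)
function makeMemoryStore() {
  const data = new Map();
  return {
    incr(key) {
      const n = (data.get(key) || 0) + 1;
      data.set(key, n);
      return n;
    }
  };
}

// Two "instances" sharing one store: the 101st request is rejected
// no matter which instance receives it
const store = makeMemoryStore();
const limiterA = makeSharedLimiter(store, 100, 60_000);
const limiterB = makeSharedLimiter(store, 100, 60_000);
let allowed = 0;
for (let i = 0; i < 300; i++) {
  if ((i % 2 ? limiterB : limiterA)('attacker')) allowed++;
}
console.log(allowed); // 100 -- the global limit holds across instances
```

Keying the counter by window number also gives cheap expiry: stale keys can be deleted by TTL rather than by an explicit sweep.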
Below are concrete code examples for a FeathersJS service that mitigates rate limiting bypass when using DynamoDB.
```javascript
const feathers = require('@feathersjs/feathers');
const express = require('@feathersjs/express');
const { TooManyRequests, BadRequest } = require('@feathersjs/errors');
const { DynamoDBClient } = require('@aws-sdk/client-dynamodb');
const { DynamoDBDocumentClient, ScanCommand, QueryCommand } = require('@aws-sdk/lib-dynamodb');

// Shared DynamoDB client
const ddbClient = new DynamoDBClient({ region: 'us-east-1' });
const ddbDoc = DynamoDBDocumentClient.from(ddbClient);

// The Express transport wraps the Feathers app directly
const app = express(feathers());
app.use(express.json());
app.configure(express.rest());

// Expose the client IP to service params; Feathers does not do this by default
app.use((req, _res, next) => {
  req.feathers.ip = req.ip;
  next();
});

// In-memory fixed window (replace with Redis or another shared store in
// production -- a per-process Map is exactly the pattern this article warns about)
const RATE_LIMIT = 100; // requests per window
const WINDOW_MS = 60_000;
const tokens = new Map(); // key -> { count, windowStart }

function allowRequest(key) {
  const now = Date.now();
  const entry = tokens.get(key) || { count: 0, windowStart: now };
  if (now - entry.windowStart > WINDOW_MS) {
    entry.count = 0;
    entry.windowStart = now;
  }
  if (entry.count >= RATE_LIMIT) return false;
  entry.count += 1;
  tokens.set(key, entry);
  return true;
}

app.use('/items', {
  async create(data, params) {
    // Prefer an authenticated user id; fall back to the client IP
    const identity = (params.user && params.user.id) || params.ip || 'anonymous';
    if (!allowRequest(identity)) {
      throw new TooManyRequests('Rate limit exceeded');
    }
    // Validate and bound input to avoid expensive DynamoDB operations
    const limit = Number(data.limit) || 10;
    const boundedLimit = Math.min(Math.max(limit, 1), 50); // cap to protect backend
    // `name` is a DynamoDB reserved word and must be aliased
    const command = new ScanCommand({
      TableName: process.env.ITEMS_TABLE,
      Limit: boundedLimit,
      ProjectionExpression: 'id, #n',
      ExpressionAttributeNames: { '#n': 'name' },
      ReturnConsumedCapacity: 'TOTAL'
    });
    const response = await ddbDoc.send(command);
    return response.Items || [];
  },

  async find(params) {
    const identity = (params.user && params.user.id) || params.ip || 'anonymous';
    if (!allowRequest(identity)) {
      throw new TooManyRequests('Rate limit exceeded');
    }
    const { query = {} } = params;
    const id = query.id;
    if (typeof id !== 'string' || id.length === 0) {
      // Require the partition key so the request can never degrade to a scan
      throw new BadRequest('id is required');
    }
    const limit = Number(query.limit) || 10;
    const boundedLimit = Math.min(Math.max(limit, 1), 50);
    const command = new QueryCommand({
      TableName: process.env.ITEMS_TABLE,
      KeyConditionExpression: 'id = :id',
      ExpressionAttributeValues: { ':id': id },
      Limit: boundedLimit,
      ReturnConsumedCapacity: 'TOTAL'
    });
    const response = await ddbDoc.send(command);
    return response.Items || [];
  }
});

// Monitor consumed capacity: ReturnConsumedCapacity above makes DynamoDB report
// the cost of each call, which services can attach to context for this hook
app.hooks({
  after: {
    all: [async context => {
      // Forward consumed-capacity figures to CloudWatch or custom metrics
      // so limits can track backend cost, not just request counts
    }]
  }
});

module.exports = app;
```
Related CWEs: uncontrolled resource consumption
| CWE ID | Name | Severity |
|---|---|---|
| CWE-400 | Uncontrolled Resource Consumption | HIGH |
| CWE-770 | Allocation of Resources Without Limits or Throttling | MEDIUM |
| CWE-799 | Improper Control of Interaction Frequency | MEDIUM |
| CWE-835 | Loop with Unreachable Exit Condition ('Infinite Loop') | HIGH |
| CWE-1050 | Excessive Platform Resource Consumption within a Loop | MEDIUM |