Distributed Denial of Service in DynamoDB
How Distributed Denial of Service Manifests in DynamoDB
Distributed Denial of Service (DDoS) attacks against DynamoDB exploit the service's scalability and pricing model to create denial of service conditions through legitimate API calls at massive scale. Unlike traditional DDoS attacks that overwhelm network bandwidth, DynamoDB DDoS focuses on exhausting provisioned throughput or incurring excessive costs.
The most common DynamoDB DDoS pattern involves provisioned throughput exhaustion. When an attacker floods your application with requests, DynamoDB starts returning HTTP 400 responses with ProvisionedThroughputExceededException. This isn't just a temporary slowdown: it can cause cascading failures across your entire application stack. The cost angle is just as dangerous: with on-demand (pay-per-request) capacity the table stays available, but sustained high-volume attacks can generate thousands of dollars in unexpected charges within hours.
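To keep a throttling burst from cascading, clients should back off rather than retry immediately. Here is a minimal sketch of exponential backoff with full jitter, where `callDynamo` is a placeholder standing in for any SDK operation that can throw ProvisionedThroughputExceededException:

```javascript
// Retry a DynamoDB call with exponential backoff and full jitter.
// `callDynamo` is a placeholder for any SDK operation that may throw
// ProvisionedThroughputExceededException.
async function withBackoff(callDynamo, maxRetries = 5, baseDelayMs = 50) {
  for (let attempt = 0; ; attempt++) {
    try {
      return await callDynamo();
    } catch (err) {
      // Propagate non-throttling errors, and give up after maxRetries
      if (err.code !== 'ProvisionedThroughputExceededException' || attempt >= maxRetries) {
        throw err;
      }
      // Full jitter: sleep a random interval up to baseDelayMs * 2^attempt
      const delayMs = Math.random() * baseDelayMs * 2 ** attempt;
      await new Promise((resolve) => setTimeout(resolve, delayMs));
    }
  }
}
```

The AWS SDKs apply a similar retry policy by default; writing it explicitly lets you layer circuit breaking or load shedding on top.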
Another specific DynamoDB DDoS vector is Hot Partition Attacks. DynamoDB distributes data across partitions based on partition key values. An attacker who understands your data model can craft requests that all target the same partition key, creating a "hot spot" that becomes a bottleneck. This is particularly effective against applications using predictable partition keys like timestamps or sequential IDs.
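A quick sketch of why predictable keys are dangerous: with a per-minute timestamp as the partition key, every write in the same minute collapses onto a single key value, so an attacker who can predict it can saturate one partition. The key format below is illustrative:

```javascript
// A per-minute timestamp partition key, e.g. "2024-01-01T00:00"
function timestampKey(date) {
  return date.toISOString().slice(0, 16);
}

// 1,000 writes spread across one minute of traffic...
const burst = Array.from(
  { length: 1000 },
  (_, i) => new Date(Date.UTC(2024, 0, 1, 0, 0, i % 60))
);
// ...all map to a single partition key value
const distinctKeys = new Set(burst.map(timestampKey)).size; // 1
```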
Conditional Writes Amplification represents a more sophisticated DDoS technique. By sending requests with conditional expressions that frequently fail, attackers force DynamoDB to fetch and evaluate the existing item on every request. A failed conditional write still consumes write capacity even though nothing is written, so an attacker can burn through your write throughput without ever persisting data.
The Batch Operations Abuse pattern involves sending massive BatchWriteItem or BatchGetItem requests. While these operations are convenient, they can be abused to submit hundreds of individual operations in a single API call. If your application doesn't properly validate batch sizes, an attacker can submit batches that approach the 16MB limit, forcing DynamoDB to process enormous payloads.
Finally, Global Secondary Index (GSI) Exhaustion attacks target the write capacity of GSIs. When you write to a table with GSIs, DynamoDB must also write to each index. An attacker can craft writes that trigger GSI updates across multiple indexes, multiplying the capacity consumed per operation. This becomes especially problematic when GSIs have different provisioned throughput settings than the base table.
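As a back-of-the-envelope illustration of the amplification (actual GSI consumption depends on the size of the projected attributes, so treat this as an upper-bound sketch):

```javascript
// Rough estimate: a write consumes ~1 WCU per 1 KB (rounded up) on the base
// table, and again on each GSI it touches. Real GSI costs depend on the
// projected item size, so this is an upper-bound sketch.
function estimateWriteUnits(itemSizeKb, gsisTouched) {
  const perWrite = Math.ceil(itemSizeKb);
  return perWrite * (1 + gsisTouched);
}

// A 1 KB write against a table with 3 affected GSIs costs roughly 4 WCUs
const amplified = estimateWriteUnits(1, 3); // 4
```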
DynamoDB-Specific Detection
Detecting DDoS attacks in DynamoDB requires monitoring specific CloudWatch metrics and implementing intelligent alerting. The primary indicators are the ConsumedReadCapacityUnits and ConsumedWriteCapacityUnits metrics compared against your provisioned capacity. When consumption approaches or exceeds what you've provisioned, you're experiencing either a legitimate traffic spike or an active attack.
Throttled Request Rate is another critical metric. Monitor ReadThrottleEvents and WriteThrottleEvents in CloudWatch. A sudden spike in throttle events, especially when correlated with increased request volume, indicates potential DDoS activity. The UserErrors metric, which counts other HTTP 400 client errors, provides additional confirmation.
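The throttle alarm can be sketched as PutMetricAlarm parameters; the table name, threshold, and SNS topic ARN below are placeholders, and the returned object would be passed to `putMetricAlarm` on a CloudWatch client:

```javascript
// Build PutMetricAlarm parameters for a throttle-event spike on one table.
// The alarm fires when throttle events exceed the threshold for three
// consecutive minutes; the SNS topic ARN is a placeholder.
function throttleAlarmParams(tableName, metricName, threshold, topicArn) {
  return {
    AlarmName: `${tableName}-${metricName}-spike`,
    Namespace: 'AWS/DynamoDB',
    MetricName: metricName, // 'ReadThrottleEvents' or 'WriteThrottleEvents'
    Dimensions: [{ Name: 'TableName', Value: tableName }],
    Statistic: 'Sum',
    Period: 60,
    EvaluationPeriods: 3,
    Threshold: threshold,
    ComparisonOperator: 'GreaterThanThreshold',
    AlarmActions: [topicArn]
  };
}

const params = throttleAlarmParams(
  'UserDataTable', 'ReadThrottleEvents', 100,
  'arn:aws:sns:us-east-1:123456789012:ops-alerts'
);
```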
For partition-level analysis, standard CloudWatch metrics don't break throttling down by partition; enable CloudWatch Contributor Insights for DynamoDB instead, which surfaces the most-accessed and most-throttled partition keys. This reveals hot partition attacks where specific keys are being targeted, though it incurs additional costs.
Latency Analysis provides early warning signs. Monitor SuccessfulRequestLatency for your DynamoDB operations. DDoS attacks often cause latency to increase before throttling occurs, as the service struggles to handle the increased load. Set up alarms for latency exceeding your baseline by more than 2-3x.
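The 2-3x heuristic can be encoded as a simple check over recent latency samples; the function name and default factor here are illustrative:

```javascript
// Flag a window of SuccessfulRequestLatency samples (in ms) whose average
// exceeds the baseline by more than `factor` (the 2-3x heuristic above).
function latencyAnomalous(samplesMs, baselineMs, factor = 2.5) {
  const avg = samplesMs.reduce((sum, v) => sum + v, 0) / samplesMs.length;
  return avg > baselineMs * factor;
}

latencyAnomalous([10, 12, 11], 10); // false: ~1.1x baseline
latencyAnomalous([40, 50, 45], 10); // true: ~4.5x baseline
```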
Cost Monitoring serves as a financial DDoS detection mechanism. Set up CloudWatch billing alarms on your estimated charges for DynamoDB. While not immediate, unexpected cost spikes can indicate sustained high-volume attacks. The EstimatedCharges metric, filtered by the ServiceName dimension, lets you track DynamoDB-specific costs.
middleBrick's DynamoDB DDoS Detection scans your DynamoDB endpoints for several DDoS-related vulnerabilities. The scanner tests for missing rate limiting on DynamoDB operations, identifies exposed DynamoDB endpoints that lack authentication, and checks for insufficient input validation that could enable batch operation abuse. middleBrick's black-box scanning approach tests the actual API surface without requiring credentials, making it ideal for detecting exposed DynamoDB endpoints in your application.
The scanner specifically looks for Open DynamoDB Endpoints where API calls to DynamoDB operations are accessible without proper authentication. This is a critical finding because it allows attackers to directly target your DynamoDB tables without going through your application's business logic. middleBrick also tests for Insufficient Request Validation by attempting to submit abnormally large batch operations and complex conditional expressions to identify potential amplification vectors.
DynamoDB-Specific Remediation
Protecting DynamoDB from DDoS attacks requires a multi-layered approach combining AWS-native features, application-level controls, and architectural patterns. The first line of defense is Proper Capacity Management. For predictable workloads, use provisioned capacity with auto-scaling enabled. Configure your auto-scaling policy with appropriate target utilization (typically 70-80%) and minimum/maximum limits that align with your budget constraints.
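The same policy can be sketched as Application Auto Scaling parameters (the table name and bounds below are illustrative); the two objects would be passed to `registerScalableTarget` and `putScalingPolicy` respectively:

```javascript
// Build Application Auto Scaling config for a table's read capacity:
// a scalable target bounded by min/max, plus a target-tracking policy
// holding utilization near the chosen percentage (70-80% is typical).
function readScalingConfig(tableName, { min, max, targetUtilization }) {
  return {
    target: {
      ServiceNamespace: 'dynamodb',
      ResourceId: `table/${tableName}`,
      ScalableDimension: 'dynamodb:table:ReadCapacityUnits',
      MinCapacity: min,
      MaxCapacity: max
    },
    policy: {
      PolicyName: `${tableName}-read-target-tracking`,
      ServiceNamespace: 'dynamodb',
      ResourceId: `table/${tableName}`,
      ScalableDimension: 'dynamodb:table:ReadCapacityUnits',
      PolicyType: 'TargetTrackingScaling',
      TargetTrackingScalingPolicyConfiguration: {
        TargetValue: targetUtilization,
        PredefinedMetricSpecification: {
          PredefinedMetricType: 'DynamoDBReadCapacityUtilization'
        }
      }
    }
  };
}

const cfg = readScalingConfig('UserDataTable', {
  min: 5, max: 500, targetUtilization: 70
});
```

A matching config for WriteCapacityUnits keeps writes covered as well.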
For unpredictable workloads or DDoS protection, On-Demand Capacity provides automatic scaling but at higher per-request costs. While this doesn't prevent DDoS attacks, it ensures your application remains available during traffic spikes. Monitor your costs closely and set up billing alerts to avoid unexpected charges.
Rate Limiting Implementation is crucial for DDoS prevention. Implement API Gateway rate limiting in front of your DynamoDB operations. Here's an example using AWS CDK:
import * as apigateway from 'aws-cdk-lib/aws-apigateway';

// Stage-level throttling caps request rates before they reach DynamoDB
const api = new apigateway.RestApi(this, 'MyApi', {
  deployOptions: {
    throttlingRateLimit: 50,   // steady-state requests per second
    throttlingBurstLimit: 100  // maximum concurrent burst
  }
});

api.root
  .addResource('DynamoDBProxy')
  .addMethod('POST', new apigateway.HttpIntegration('DYNAMODB_PROXY_URL'));
Partition Key Design is a critical architectural consideration. Use high-cardinality partition keys that distribute writes evenly. For time-series data, consider composite keys that include random elements, or a write-sharding strategy where you prepend a random shard prefix to the actual partition key. Here's an example:
const AWS = require('aws-sdk');
const dynamoDb = new AWS.DynamoDB.DocumentClient();

const userId = 'user123';
// Spread writes for the same user across 100 shards; note that reads must
// then fan out across all possible prefixes (or derive the prefix
// deterministically, e.g. by hashing the userId).
const randomPrefix = Math.floor(Math.random() * 100).toString().padStart(2, '0');
const partitionKey = `${randomPrefix}#${userId}`;

await dynamoDb.put({
  TableName: 'UserDataTable',
  Item: {
    pk: partitionKey,
    sk: 'DATA',
    data: userData
  }
}).promise();
Request Validation and Throttling at the application layer prevents many DDoS vectors. Implement validation for batch operation sizes, reject requests with overly complex conditional expressions, and enforce reasonable limits on request payload sizes. Here's a middleware example:
// Per-operation limits: 25 write requests for BatchWriteItem, 100 keys for BatchGetItem
const BATCH_LIMITS = { BatchWriteItem: 25, BatchGetItem: 100 };

const validateDynamoRequest = (req, res, next) => {
  const { operation, params } = req.body;
  // Validate batch sizes; RequestItems maps table names to an array of write
  // requests (BatchWriteItem) or to an object with a Keys array (BatchGetItem)
  if (operation in BATCH_LIMITS) {
    const itemCount = Object.values(params.RequestItems || {}).reduce(
      (sum, table) => sum + (Array.isArray(table) ? table.length : (table.Keys || []).length),
      0
    );
    if (itemCount > BATCH_LIMITS[operation]) {
      return res.status(400).json({
        error: `Batch size exceeds maximum of ${BATCH_LIMITS[operation]} items`
      });
    }
  }
  // Use expression length as a cheap proxy for conditional complexity
  if (operation === 'PutItem' || operation === 'UpdateItem') {
    const condition = params.ConditionExpression;
    if (condition && condition.length > 200) {
      return res.status(400).json({
        error: 'Conditional expression too complex'
      });
    }
  }
  next();
};
Access Control and VPC Endpoints limit the attack surface. Use VPC endpoints for DynamoDB to prevent internet-based access, and implement IAM policies with least privilege. Here's an example IAM policy that restricts DynamoDB operations:
import * as iam from 'aws-cdk-lib/aws-iam';

// Explicitly deny the batch operations that enable amplification attacks
const denyBatchOps = new iam.PolicyStatement({
  effect: iam.Effect.DENY,
  resources: [`arn:aws:dynamodb:${region}:${account}:table/${tableName}`],
  actions: [
    'dynamodb:BatchWriteItem',
    'dynamodb:BatchGetItem'
  ]
});

// Allow only the item-level operations your application actually uses
const allowItemOps = new iam.PolicyStatement({
  effect: iam.Effect.ALLOW,
  resources: [`arn:aws:dynamodb:${region}:${account}:table/${tableName}`],
  actions: [
    'dynamodb:GetItem',
    'dynamodb:PutItem',
    'dynamodb:UpdateItem',
    'dynamodb:DeleteItem'
  ]
});

// Attach both statements to your application's execution role
applicationRole.addToPolicy(denyBatchOps);
applicationRole.addToPolicy(allowItemOps);
Monitoring and Automated Response completes your DDoS protection strategy. Set up CloudWatch alarms for throttling events and latency anomalies. Use AWS Lambda functions to automatically adjust provisioned capacity or temporarily block suspicious IP addresses when attack patterns are detected. Here's a simple Lambda function that responds to high throttling:
const AWS = require('aws-sdk');

exports.handler = async (event) => {
  const throttled = event.detail.throttledRequests || 0;
  const threshold = 100; // Adjust based on your traffic patterns
  if (throttled > threshold) {
    // Scale up provisioned capacity (only valid for provisioned-mode tables)
    const dynamodb = new AWS.DynamoDB();
    await dynamodb.updateTable({
      TableName: 'YourTableName',
      ProvisionedThroughput: {
        ReadCapacityUnits: 1000,
        WriteCapacityUnits: 1000
      }
    }).promise();
    // Or add the source address to an existing WAF IP set; WAFV2 updates
    // require the set's Id and current LockToken, and CIDR-form addresses
    const waf = new AWS.WAFV2();
    const { IPSet, LockToken } = await waf.getIPSet({
      Name: 'SuspiciousIPs',
      Scope: 'REGIONAL',
      Id: process.env.IP_SET_ID
    }).promise();
    await waf.updateIPSet({
      Name: 'SuspiciousIPs',
      Scope: 'REGIONAL',
      Id: process.env.IP_SET_ID,
      LockToken,
      Addresses: [...IPSet.Addresses, `${event.detail.attackerIP}/32`]
    }).promise();
  }
};