
Denial of Service in Express with DynamoDB

Denial of Service in Express with DynamoDB — how this specific combination creates or exposes the vulnerability

When an Express service calls DynamoDB, several patterns can amplify availability risks. A common scenario is a hot partition key combined with unthrottled request bursts, causing consumed read/write capacity to spike and degrade responsiveness for legitimate traffic. A query without pagination or a scan on a large table can consume significant provisioned capacity, leading to throttling (ProvisionedThroughputExceededException) that may cascade into longer timeouts and thread exhaustion in the application layer.
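The unbounded pattern described above can be sketched as follows. This is an anti-pattern to avoid, not a recommendation, and the function and table names are illustrative:

```javascript
// ANTI-PATTERN sketch (names are illustrative): an unbounded scan that pages
// through the entire table. Each page consumes read capacity, so on a large
// table this can exhaust provisioned throughput and starve legitimate traffic.
async function scanEverything(dynamodb, tableName) {
  const items = [];
  let lastKey;
  do {
    const page = await dynamodb
      .scan({ TableName: tableName, ExclusiveStartKey: lastKey })
      .promise();
    items.push(...page.Items);        // memory also grows without bound
    lastKey = page.LastEvaluatedKey;  // loops until the whole table is read
  } while (lastKey);
  return items;
}
```

Bounding the page count and page size, as in the remediation examples below, is what turns this into a safe access pattern.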

Another vector is missing or weak per-client rate limiting. Without per-client or per-route limits, a single attacker can issue high-frequency queries or conditional writes that saturate DynamoDB capacity and trigger 4xx/5xx errors that tie up Express request-handling resources. Inadequate error handling around ProvisionedThroughputExceededException or TransactionConflictException can also cause retries to multiply load, worsening contention.

DynamoDB errors such as ProvisionedThroughputExceededException and TransactionConflictException surface as client-visible 4xx/5xx responses. If Express treats these as generic server errors without distinguishing retryable from non-retryable failures, clients may retry aggressively, intensifying load. Long-running queries or scans awaited serially on the request path can also delay responses, increasing latency and the chance of timeouts at load balancers or API gateways.
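One way to make that retryable/non-retryable distinction explicit is a small classifier. This is a sketch: the error-code list is illustrative rather than exhaustive, and the function names are hypothetical (AWS SDK v2 exposes the code on `err.code`):

```javascript
// Sketch: classify DynamoDB (AWS SDK v2) errors by whether a retry is sensible.
// The set of codes below is illustrative, not exhaustive.
const RETRYABLE_CODES = new Set([
  'ProvisionedThroughputExceededException',
  'ThrottlingException',
  'TransactionConflictException',
  'InternalServerError'
]);

function isRetryable(err) {
  return RETRYABLE_CODES.has(err && err.code);
}

// Map a DynamoDB error to an HTTP status the Express layer can return.
function toHttpStatus(err) {
  if (isRetryable(err)) return 503;                           // ask clients to back off
  if (err && err.code === 'ValidationException') return 400;  // caller's fault; retrying won't help
  return 500;                                                 // unknown server-side failure
}
```

Returning 503 (rather than a generic 500) for retryable failures lets well-behaved clients and proxies apply their own backoff instead of hammering the service.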

Because middleBrick scans the unauthenticated attack surface, it can detect missing rate limiting around DynamoDB-bound endpoints and surface high-severity findings for endpoints that lack per-client throttling or exhibit patterns prone to capacity exhaustion. This helps teams correlate configuration and code issues—such as missing pagination, unconditional scans, or retry storms—with observable availability risks.

DynamoDB-Specific Remediation in Express — concrete code fixes

Apply targeted mitigations at the Express layer and DynamoDB usage pattern to reduce availability impact. Use pagination, targeted queries, and conditional rate limiting; handle retryable errors with exponential backoff; and enforce sensible timeouts to prevent resource exhaustion.
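The exponential backoff mentioned above can be sketched as a small helper. The names (`backoffDelay`, `withBackoff`) and the base/cap defaults are illustrative, not a prescribed configuration:

```javascript
// Sketch: full-jitter exponential backoff. baseMs/capMs are illustrative
// defaults; tune per workload.
function backoffDelay(attempt, baseMs = 100, capMs = 2000) {
  const exp = Math.min(capMs, baseMs * 2 ** attempt); // 100, 200, 400, ... capped
  return Math.floor(Math.random() * exp);             // full jitter: [0, exp)
}

// Retry wrapper: short-circuits after maxAttempts so application-level
// retries cannot multiply into a retry storm.
async function withBackoff(fn, { maxAttempts = 3, isRetryable = () => false } = {}) {
  for (let attempt = 0; ; attempt++) {
    try {
      return await fn();
    } catch (err) {
      if (attempt + 1 >= maxAttempts || !isRetryable(err)) throw err;
      await new Promise((resolve) => setTimeout(resolve, backoffDelay(attempt)));
    }
  }
}
```

Full jitter (randomizing over the whole window) spreads retries from many clients across time, which avoids the synchronized retry waves that plain exponential backoff can produce.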

Code example: Paginated scan with timeouts and bounded retries

const AWS = require('aws-sdk');
const dynamodb = new AWS.DynamoDB.DocumentClient({
  region: 'us-east-1',
  maxRetries: 2,          // top-level option in SDK v2; bounds SDK-side retries
  httpOptions: {
    connectTimeout: 1000, // fail fast on unreachable endpoints
    timeout: 5000         // cap per-request time to avoid hung connections
  }
});

// Express route with pagination, input validation, and bounded retries
app.get('/items', async (req, res) => {
  const { lastKey, limit } = req.query;
  const parsed = Number(limit);
  const safeLimit = Number.isFinite(parsed) ? Math.min(Math.max(parsed, 1), 50) : 20;
  const params = {
    TableName: process.env.DDB_TABLE,
    Limit: safeLimit
  };
  if (lastKey) {
    try {
      params.ExclusiveStartKey = JSON.parse(lastKey);
    } catch {
      // Malformed pagination token is a client error, not a server failure
      return res.status(400).json({ error: 'Invalid pagination token' });
    }
  }
  try {
    const data = await dynamodb.scan(params).promise();
    res.json({
      items: data.Items,
      lastKey: data.LastEvaluatedKey ? JSON.stringify(data.LastEvaluatedKey) : null
    });
  } catch (err) {
    if (err.code === 'ProvisionedThroughputExceededException' ||
        err.code === 'TransactionConflictException') {
      // SDK-level retries (maxRetries) have already run; surface a degraded
      // response so clients back off instead of hammering the table
      return res.status(503).json({ error: 'Service temporarily unavailable, please retry' });
    }
    res.status(500).json({ error: 'Internal server error' });
  }
});

Code example: Per-route rate limiting with Redis

const rateLimit = require('express-rate-limit');
const { RedisStore } = require('rate-limit-redis');
const { createClient } = require('redis');

const redisClient = createClient({ url: process.env.REDIS_URL });
redisClient.connect().catch(console.error); // node-redis v4 clients must connect explicitly

const apiLimiter = rateLimit({
  // rate-limit-redis v3+ takes a sendCommand callback rather than a client;
  // older v2 releases accepted { client: redisClient } instead
  store: new RedisStore({
    sendCommand: (...args) => redisClient.sendCommand(args)
  }),
  windowMs: 60 * 1000, // 1-minute window
  max: 100,            // per-client ceiling within the window
  keyGenerator: (req) => req.ip,
  handler: (req, res) => {
    res.status(429).json({ error: 'Too many requests' });
  }
});

app.use('/api/dynamodb', apiLimiter);

Best practices summary

  • Prefer query over scan; when scan is necessary, use pagination and project only required attributes
  • Set reasonable HTTP timeouts and maxRetries on the DynamoDB client to avoid hung connections
  • Handle ProvisionedThroughputExceededException and TransactionConflictException distinctly; use exponential backoff on the client and short-circuit excessive retries at the Express layer
  • Apply per-route and per-client rate limits; use Redis-backed stores for distributed enforcement
  • Monitor consumed capacity and CloudWatch throttling metrics; correlate with Express error rates to detect early availability degradation
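The first two bullets can be sketched as a params builder for a targeted query. The table, key, and attribute names here are hypothetical:

```javascript
// Sketch: build params for a targeted Query (preferred over Scan) with a
// bounded page size and a projection of only the attributes the route needs.
// Table, key, and attribute names are hypothetical.
function buildUserItemsQuery(userId, { limit = 20, lastKey = null } = {}) {
  const safeLimit = Math.min(Math.max(Number(limit) || 20, 1), 50);
  const params = {
    TableName: process.env.DDB_TABLE || 'Items',
    KeyConditionExpression: 'userId = :uid',          // hits one partition, no full scan
    ExpressionAttributeValues: { ':uid': userId },
    ProjectionExpression: 'itemId, title, updatedAt', // only what the response needs
    Limit: safeLimit
  };
  if (lastKey) params.ExclusiveStartKey = lastKey;
  return params;
}
```

Passing the result to `dynamodb.query(params).promise()` keeps each read bounded to a single partition key and a capped page size, so worst-case consumed capacity per request is predictable.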

Related CWEs: Resource Consumption

CWE ID     Name                                                    Severity
CWE-400    Uncontrolled Resource Consumption                       HIGH
CWE-770    Allocation of Resources Without Limits or Throttling    MEDIUM
CWE-799    Improper Control of Interaction Frequency               MEDIUM
CWE-835    Loop with Unreachable Exit Condition ('Infinite Loop')  HIGH
CWE-1050   Excessive Platform Resource Consumption within a Loop   MEDIUM

Frequently Asked Questions

Can DynamoDB throttling alone trigger a Denial of Service finding in middleBrick scans?
Yes. middleBrick flags endpoints that interact with DynamoDB and show patterns likely to cause capacity exhaustion—such as scans without pagination or missing rate limiting—as high-severity availability findings.
Does middleBrick suggest specific retry/backoff configurations for DynamoDB in Express?
middleBrick provides remediation guidance, such as using exponential backoff and limiting retries, and recommends client-side safeguards like timeouts and rate limiting to reduce the likelihood of retry storms that amplify Denial of Service conditions.