Severity: HIGH

Distributed Denial of Service in FeathersJS with CockroachDB

Distributed Denial of Service in FeathersJS with CockroachDB — how this specific combination creates or exposes the vulnerability

A DDoS scenario involving a FeathersJS service backed by CockroachDB typically arises from a combination of unthrottled application-level requests and database-side resource contention. FeathersJS is a framework that favors real-time features via WebSockets and can stream large result sets; without explicit limits, an attacker can open many concurrent connections or trigger computationally expensive hooks and service methods. CockroachDB, while horizontally scalable, still experiences contention under heavy write or read load from poorly constrained queries, long-running scans, or excessive retries caused by client-side logic. The interaction becomes critical when a single FeathersJS endpoint translates into heavy SQL work—such as unindexed lookups, complex joins, or full-table scans—across CockroachDB nodes, consuming node resources (CPU, memory, I/O) and potentially triggering request timeouts or backpressure that affects other clients.

Specific risk patterns include:

  • Unbounded find queries that pull large pages or stream indefinitely, forcing CockroachDB to scan many ranges and hold memory and CPU for long-running result sets.
  • Missing or misconfigured authentication/authorization hooks that allow unauthenticated or excessive calls, turning an unauthenticated endpoint into an easy DDoS vector.
  • Real-time channels (e.g., FeathersJS Channels) that push high-frequency updates backed by CockroachDB changefeeds, multiplying load when many clients subscribe.
  • Retries and exponential backoff on the client side during transient errors, causing amplified request bursts that CockroachDB must handle.

Because FeathersJS often exposes CRUD-style endpoints that map closely to CockroachDB tables, an attacker can craft requests that trigger full scans or heavy write loads. The framework’s flexibility means developers must explicitly enforce query constraints and rate controls; without them, the combination naturally exposes resource exhaustion risks.

CockroachDB-Specific Remediation in FeathersJS — concrete code fixes

Mitigation focuses on constraining database interaction and hardening the FeathersJS service. Use strict pagination, query filters, and explicit hooks to limit payload size and rate. Ensure indexes exist for common filter fields to avoid full scans. Apply service-level rate limiting and validate inputs early to reduce abusive load on CockroachDB.

1) Pagination and query constraints

Enforce server-side pagination and maximum page size to prevent large scans. In your FeathersJS service definition, configure the paginate hook and validate query parameters.

// src/hooks/pagination.js
module.exports = {
  before: {
    async find(context) {
      const { params } = context;
      const maxLimit = 100;
      const query = params.query || {};

      // Enforce a sane default and a hard cap; reject negative values
      const $limit = Math.min(Math.max(Number(query.$limit) || 20, 1), maxLimit);
      const $skip = Math.max(Number(query.$skip) || 0, 0);

      // Ensure an index exists for commonly filtered/sorted fields
      // e.g., CREATE INDEX idx_users_status ON users (status);
      if (query.status && !query.$sort) {
        query.$sort = { createdAt: 1 };
      }

      context.params.query = {
        ...query,
        $limit,
        $skip,
      };
      return context;
    }
  }
};

Apply this hook to services that map to CockroachDB tables. The index on status (or your filter/sort field) avoids full-table scans, reducing CPU and I/O on CockroachDB nodes.
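Registering the hooks object is a one-liner per service. The sketch below shows the wiring pattern; the `users` and `posts` service names are illustrative, and a tiny stand-in object with the same `.service().hooks()` surface as a Feathers app is used so the pattern can run without a full server.

```javascript
// Placeholder with the same shape as src/hooks/pagination.js above
const paginationHooks = {
  before: { async find(context) { return context; } }
};

// Register the same hooks on every service that maps to a CockroachDB table
function registerHooks(app) {
  for (const name of ['users', 'posts']) {
    app.service(name).hooks(paginationHooks);
  }
}

// Stand-in demonstrating the call pattern without a running server
const registered = [];
const app = {
  service: name => ({ hooks: hooks => registered.push({ name, hooks }) })
};
registerHooks(app);
console.log(registered.map(r => r.name).join(',')); // users,posts
```

In a real app, `app` is your Feathers application and `paginationHooks` is required from the hook file above.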

2) Rate limiting at the FeathersJS layer

Use a lightweight rate limiter to throttle requests per identity or IP before they reach CockroachDB. Below is an in-memory example; in production, use a shared store (e.g., Redis) if you have multiple nodes.

// src/hooks/rate-limit.js
const { TooManyRequests } = require('@feathersjs/errors');

const bucket = new Map();

module.exports = function rateLimit(options = {}) {
  const { max = 60, windowMs = 60_000 } = options;
  return function rateLimitHook(context) {
    const { connection, ip, user } = context.params;
    // Key by user ID when available, otherwise by socket or REST client IP
    const key =
      (user && user.id) ||
      (connection && connection.remoteAddress) ||
      ip ||
      'anonymous';
    const now = Date.now();
    const entry = bucket.get(key) || { count: 0, resetAt: now + windowMs };

    if (now > entry.resetAt) {
      entry.count = 0;
      entry.resetAt = now + windowMs;
    }
    entry.count += 1;
    bucket.set(key, entry);

    if (entry.count > max) {
      throw new TooManyRequests('Too Many Requests');
    }
    return context;
  };
};

Register this hook globally or on heavy endpoints to smooth request bursts and protect CockroachDB from sudden traffic spikes.
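The limiter's behavior can be exercised in isolation. The sketch below inlines the same windowed-counter logic as the hook above so it runs standalone; a plain Error with a `code` property stands in for the `TooManyRequests` error class.

```javascript
// Inlined copy of the windowed counter from src/hooks/rate-limit.js
const bucket = new Map();
function rateLimit({ max = 60, windowMs = 60_000 } = {}) {
  return function (context) {
    const key =
      (context.params.connection && context.params.connection.remoteAddress) ||
      'anonymous';
    const now = Date.now();
    const entry = bucket.get(key) || { count: 0, resetAt: now + windowMs };
    if (now > entry.resetAt) {
      entry.count = 0;
      entry.resetAt = now + windowMs;
    }
    entry.count += 1;
    bucket.set(key, entry);
    if (entry.count > max) {
      const err = new Error('Too Many Requests'); // stand-in for TooManyRequests
      err.code = 429;
      throw err;
    }
    return context;
  };
}

const hook = rateLimit({ max: 2, windowMs: 60_000 });
const ctx = { params: {} };
hook(ctx); // 1st request passes
hook(ctx); // 2nd request passes
let status = 'ok';
try { hook(ctx); } catch (e) { status = e.code; } // 3rd is throttled
console.log(status); // 429
```

In production, register the real hook with `app.hooks({ before: { all: [rateLimit(...)] } })` and back the counter with a shared store such as Redis when running multiple nodes.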

3) Real-time channel controls

If you use FeathersJS channels with CockroachDB changefeeds or heavy find calls, restrict which connections receive events and drop no-op updates to limit fan-out when many clients subscribe.

// src/channels/realtime.js
module.exports = function setupChannels(app) {
  if (typeof app.channel !== 'function') {
    return; // no real-time transport configured
  }

  app.on('connection', connection => {
    // New connections start in an anonymous channel
    app.channel('anonymous').join(connection);
  });

  app.on('login', (authResult, { connection }) => {
    if (connection) {
      // Promote authenticated clients; anonymous clients receive no events
      app.channel('anonymous').leave(connection);
      app.channel('authenticated').join(connection);
    }
  });

  // Publish only to authenticated clients, and suppress no-op patches
  // so empty updates do not flood subscribers
  app.publish((data, context) => {
    if (context.method === 'patch' && data && Object.keys(data).length === 0) {
      return []; // nothing to publish
    }
    return app.channel('authenticated');
  });
};

4) Input validation and early rejection

Reject malformed or excessively large payloads before they touch CockroachDB. Use hook validators to enforce constraints on create/update payloads.

// src/hooks/validate.js
const { BadRequest } = require('@feathersjs/errors');

module.exports = function validatePayload(maxSize = 1000) {
  return function validateHook(context) {
    const data = context.data;
    if (data && typeof data === 'object') {
      const size = Buffer.byteLength(JSON.stringify(data), 'utf8');
      if (size > maxSize) {
        throw new BadRequest(`Payload exceeds ${maxSize} bytes`);
      }
    }
    return context;
  };
};

Combine these hooks in your service configuration to ensure each request is bounded and CockroachDB load remains predictable.
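One possible combined configuration for a service backed by a CockroachDB table is sketched below. In the real app the hooks would be required from the files above; stand-ins with the same signatures are inlined here so the shape is self-contained.

```javascript
// Stand-ins for the hooks defined in the sections above
const rateLimit = opts => ctx => ctx;      // src/hooks/rate-limit.js
const validatePayload = max => ctx => ctx; // src/hooks/validate.js
const paginationFind = async ctx => ctx;   // find hook from src/hooks/pagination.js

const serviceHooks = {
  before: {
    all: [rateLimit({ max: 60, windowMs: 60_000 })], // throttle every method
    find: [paginationFind],                          // bound reads
    create: [validatePayload(1000)],                 // bound writes
    update: [validatePayload(1000)],
    patch: [validatePayload(1000)]
  }
};

module.exports = serviceHooks;
```

This ordering runs the cheap rate-limit check first, so abusive traffic is rejected before any query is built or any payload is serialized.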

5) Indexing and query review

Ensure CockroachDB indexes support your FeathersJS query patterns. For a users service, create indexes that match the fields clients commonly pass in $sort and in equality filters.

-- Example CockroachDB SQL
CREATE INDEX idx_users_email ON users (email);
CREATE INDEX idx_posts_user_status ON posts (user_id, status);

With proper indexes, FeathersJS queries translate into efficient index scans rather than full-table scans, reducing CockroachDB contention during peak traffic.

Frequently Asked Questions

Can FeathersJS hooks prevent DDoS when using CockroachDB?
Yes, hooks can enforce pagination, rate limiting, input validation, and query constraints to reduce abusive load on CockroachDB. Combine hooks with database indexes for best results.
Is it enough to rely on CockroachDB’s built-in concurrency controls to mitigate DDoS?
No. CockroachDB handles concurrency well, but application-layer controls in FeathersJS—such as request throttling and bounded queries—are necessary to limit resource consumption and prevent amplified load from unthrottled endpoints.