HIGH | Distributed Denial of Service | Mutual TLS

Distributed Denial of Service with Mutual TLS

How Distributed Denial of Service Manifests in Mutual TLS

Mutual TLS (mTLS) adds a client‑certificate exchange to the standard TLS handshake. While this strengthens authentication, it also creates additional CPU‑ and memory‑intensive code paths that an attacker can abuse to exhaust server resources. The most common mTLS‑specific DDoS patterns are:

  • Handshake flood: The attacker opens many new TCP connections and initiates a full TLS handshake for each, forcing the server to perform expensive asymmetric cryptography (RSA/ECDSA) for every client certificate verification.
  • Client‑certificate validation exhaustion: If the server validates certificates against a large CRL or performs online OCSP checks without caching, each handshake triggers network I/O and CPU work, amplifying the cost per connection.
  • Session‑ticket or session‑id abuse: By presenting unique session identifiers or refusing to reuse tickets, the attacker prevents session resumption, ensuring every connection bears the full handshake cost.
  • Renegotiation loop: In older TLS versions, an attacker can trigger repeated renegotiations, causing the server to re‑verify client certificates many times within a single connection.

These attacks target the mutual‑TLS‑specific code paths: the certificate verification callback, the session‑ticket key lookup, and the OCSP/CRL validation routine. Because the server must complete the handshake before it can apply any application‑level rate limiting, the attack can saturate the TLS layer even when the application itself is idle.

Mutual TLS‑Specific Detection

Detecting an mTLS‑focused DDoS begins with observing symptoms that point to handshake exhaustion rather than application‑level overload:

  • Spikes in CPU usage proportional to new TCP connections, while request‑per‑second metrics stay low.
  • Increased memory consumption in the TLS library’s session cache or certificate verification structures.
  • Logs showing a high rate of "certificate verify failed" or "handshake timeout" messages from clients that never send application data.
  • Absence of session reuse: the server’s TLS statistics show a near‑zero session‑resumption ratio.

middleBrick’s unauthenticated black‑box scan includes a Rate Limiting check that probes the endpoint for missing connection‑ or handshake‑level throttling. When you run a scan, the tool attempts a rapid series of TLS handshakes (without sending application data) and measures whether the server responds with HTTP 429, closes connections, or shows degraded latency. If the server accepts an uncontrolled volume of handshakes, middleBrick flags a finding with severity HIGH and provides remediation guidance.

Example CLI usage:

# Install the middleBrick CLI (npm)
npm i -g middlebrick
# Scan an mTLS‑protected API
middlebrick scan https://api.example.com --mtls

The --mtls flag tells the scanner to present a valid client certificate during the handshake, allowing it to test the exact code path that an attacker would target. The resulting report includes a breakdown of the rate‑limiting check, highlighting whether the server enforces limits on new TLS sessions or client‑certificate verification frequency.

Mutual TLS‑Specific Remediation

Mitigating mTLS‑focused DDoS relies on limiting the cost of each handshake and ensuring that legitimate clients can reuse session state. Apply these controls directly in the TLS configuration of your server or reverse proxy.

1. Limit concurrent handshakes

Cap the number of handshakes in flight so that excess connections are shed before any expensive crypto work begins. Some TLS stacks expose this directly; Node's tls module does not, but you can enforce the cap at the socket layer:

// Node.js (tls) – cap in-flight handshakes at 100
const fs = require('fs');
const tls = require('tls');

const MAX_HANDSHAKES = 100;
let inflight = 0;

const server = tls.createServer({
  key: fs.readFileSync('server.key'),
  cert: fs.readFileSync('server.crt'),
  requestCert: true,          // require a client certificate
  rejectUnauthorized: true,   // fail the handshake if it is invalid
  ca: fs.readFileSync('ca.crt')
});

// 'connection' fires on the raw TCP socket, before the TLS
// handshake begins – a cheap point at which to shed excess load.
server.on('connection', (socket) => {
  if (inflight >= MAX_HANDSHAKES) { socket.destroy(); return; }
  inflight++;
  socket.once('close', () => { inflight--; });
});

2. Enable and size session ticket caching (TLS 1.2/1.3)

Session tickets let clients resume a session without repeating the full handshake. Configure a ticket key rotation scheme and a reasonable ticket lifetime.

// Go (crypto/tls) – enable session tickets and rotate keys hourly
config := &tls.Config{
    Certificates: []tls.Certificate{cert},
    ClientAuth:   tls.RequireAndVerifyClientCert,
    ClientCAs:    caPool,
    // Session tickets are on by default; keep them enabled so
    // returning clients can resume instead of redoing the handshake.
    SessionTicketsDisabled: false,
}

// Rotate ticket keys every hour, keeping the previous key active
// so recently issued tickets still resume. The first key is used
// for new tickets; older keys are accepted for decryption.
go func() {
    keys := [][32]byte{newTicketKey(), newTicketKey()}
    config.SetSessionTicketKeys(keys)
    for range time.Tick(time.Hour) {
        keys = [][32]byte{newTicketKey(), keys[0]}
        config.SetSessionTicketKeys(keys)
    }
}()

// newTicketKey returns 32 bytes of fresh random key material.
func newTicketKey() (k [32]byte) {
    rand.Read(k[:])
    return k
}

3. Cache client‑certificate validation results

Avoid performing OCSP/CRL checks on every handshake. Use a short‑lived cache (e.g., 5‑10 minutes) for revocation status.

// Go verification callback (tls.Config.VerifyPeerCertificate shape)
// that caches OCSP revocation results for five minutes
var (
    cacheMu         sync.Mutex
    revocationCache = map[string]bool{} // DER cert -> revoked?
)

func verifyClientCert(rawCerts [][]byte, verifiedChains [][]*x509.Certificate) error {
    key := string(rawCerts[0])
    cacheMu.Lock()
    revoked, ok := revocationCache[key]
    cacheMu.Unlock()
    if !ok {
        var err error
        if revoked, err = ocspCheck(rawCerts[0]); err != nil {
            return errors.New("revocation check failed")
        }
        cacheMu.Lock()
        revocationCache[key] = revoked
        cacheMu.Unlock()
        // Evict after 5 minutes so fresh revocations take effect.
        time.AfterFunc(5*time.Minute, func() {
            cacheMu.Lock(); delete(revocationCache, key); cacheMu.Unlock()
        })
    }
    if revoked {
        return errors.New("client certificate revoked")
    }
    return nil
}

4. Deploy a reverse proxy with built‑in rate limiting

Place an mTLS‑terminating proxy (e.g., NGINX, Envoy) in front of your application. Configure connection‑level limits (e.g., limit_conn_zone in NGINX) so that the proxy absorbs the handshake flood before it reaches your backend.

# NGINX (http context) – cap concurrent connections per client IP
limit_conn_zone $binary_remote_addr zone=mtls:10m;   # must sit in the http{} block

server {
    listen 443 ssl;
    ssl_certificate        /etc/nginx/server.crt;
    ssl_certificate_key    /etc/nginx/server.key;
    ssl_client_certificate /etc/nginx/ca.crt;
    ssl_verify_client      on;
    limit_conn mtls 50;    # at most 50 concurrent connections per IP
    # ... proxy_pass to upstream
}

By combining these controls—handshake concurrency caps, session reuse, cached revocation checks, and proxy‑based connection throttling—you raise the cost for an attacker to launch a successful mTLS‑DDoS while preserving low latency for legitimate clients.

Frequently Asked Questions

Can middleBrick stop a DDoS attack targeting a mutual TLS endpoint?
No. middleBrick is a detection‑only scanner. It reports whether the endpoint lacks sufficient handshake‑level rate limiting or session reuse controls, providing remediation guidance so you can configure your TLS stack or proxy to mitigate the risk.
How often should I rescan my mutual TLS‑protected API with middleBrick to catch emerging DDoS risks?
Run a scan whenever you change TLS configuration, update client‑certificate validation logic, or deploy a new version of the service. For continuous assurance, the middleBrick Pro plan offers scheduled scans (e.g., daily or weekly) that alert you if the rate‑limiting check regresses.