
Cache Poisoning in Hapi with Mutual TLS

How This Specific Combination Creates or Exposes the Vulnerability

Cache poisoning occurs when an attacker manipulates cached responses so that subsequent users receive malicious or incorrect data. In Hapi, this typically arises when cache keys do not incorporate all request dimensions that affect the response, such as authentication material, request headers, or TLS session context. When Mutual TLS is used, the server authenticates the client certificate, but if the application builds cache keys from only the URL and query parameters, different clients with distinct certificates may inadvertently share cached responses.

For example, consider a Hapi endpoint that caches user profile data. If the cache key is derived solely from the request URL and query string, a response cached for a high-privilege user could be served to a low-privilege user presenting a different client certificate. MiddleBrick’s checks for BOLA/IDOR and Authentication detect scenarios where authorization is not properly bound to the cache key, including those arising from TLS-based client identification that is not reflected in caching logic.

Mutual TLS introduces additional complexity: Hapi does not expose the client certificate on the request object by default, so applications typically read it from the underlying TLS socket (request.raw.req.socket.getPeerCertificate()) and copy it into a per-request namespace such as request.app. If that certificate context is omitted from the cache key, the cache becomes insensitive to the authenticated client. An attacker could probe endpoints that rely on stale cache entries, extracting information across client identities. This aligns with LLM/AI Security concerns when cached responses contain sensitive data that could be leaked through crafted prompts or output scanning; Data Exposure checks flag endpoints where sensitive payloads may be stored without proper context isolation.

Furthermore, cache poisoning in Hapi with Mutual TLS can be exacerbated by HTTP method and header normalization issues. If the cache layer treats GET and HEAD similarly or ignores certain headers that differentiate client context, the poisoned cache may persist across varied requests. Rate Limiting and Property Authorization checks help surface anomalous request patterns that suggest abuse of shared cache entries across distinct TLS-authenticated sessions.

To illustrate a vulnerable pattern, suppose a Hapi server uses a simple string concatenation for cache keys without including the client certificate fingerprint:

const cacheKey = 'profile:' + request.query.userId;

Here, the client certificate is authenticated by the server, but not used to segregate cache entries. MiddleBrick’s OpenAPI/Swagger spec analysis would flag this as a potential BOLA/IDOR issue when combined with unauthenticated LLM endpoint probing, since cached data might leak across authenticated contexts.

Mutual TLS-Specific Remediation in Hapi — concrete code fixes

To prevent cache poisoning when using Mutual TLS in Hapi, ensure the cache key incorporates elements that uniquely identify the authenticated client. This typically means including a representation of the client certificate or its fingerprint in the cache key. Below are concrete, working examples that integrate TLS client verification into Hapi caching logic.

First, configure Hapi with TLS options and use a request lifecycle extension to capture the client certificate from the underlying socket:

const Hapi = require('@hapi/hapi');
const fs = require('fs');

const init = async () => {
  const server = Hapi.server({
    port: 443,
    tls: {
      key: fs.readFileSync('server-key.pem'),
      cert: fs.readFileSync('server-cert.pem'),
      ca: [fs.readFileSync('ca-cert.pem')],
      requestCert: true,
      rejectUnauthorized: true
    }
  });

  // Copy the verified client certificate off the raw TLS socket early in the
  // request lifecycle so route handlers and cache-key logic can reach it.
  // getPeerCertificate() returns an empty object when no certificate was sent.
  server.ext('onRequest', (request, h) => {
    const socket = request.raw.req.socket;
    const peer = socket.getPeerCertificate ? socket.getPeerCertificate() : null;
    request.app.tlsPeer = peer && peer.fingerprint256 ? peer : null;
    return h.continue;
  });

  // Hapi ships no built-in TLS auth scheme; register a minimal custom scheme
  // that treats a certificate already verified by rejectUnauthorized above as
  // the authenticated identity.
  const Boom = require('@hapi/boom');
  server.auth.scheme('tls', () => ({
    authenticate: (request, h) => {
      const socket = request.raw.req.socket;
      const peer = socket.getPeerCertificate ? socket.getPeerCertificate() : null;
      if (!peer || !peer.fingerprint256) {
        throw Boom.unauthorized('client certificate required');
      }
      return h.authenticated({ credentials: { fingerprint: peer.fingerprint256 } });
    }
  }));
  server.auth.strategy('tls-auth', 'tls');
  server.auth.default('tls-auth');

  await server.start();
};

Next, build cache keys that include the client certificate fingerprint. Use a stable representation such as the SHA-256 hash of the certificate to avoid variability:

const crypto = require('crypto');

const getCacheKey = (request) => {
  // request.app.tlsPeer is populated from the TLS socket earlier in the
  // request lifecycle; tlsPeer.raw is the DER-encoded certificate (a Buffer).
  const tlsPeer = request.app.tlsPeer;
  let clientFingerprint = 'none';
  if (tlsPeer && tlsPeer.raw) {
    // Hashing the DER bytes gives a stable per-certificate value; Node also
    // exposes the same digest directly as tlsPeer.fingerprint256.
    clientFingerprint = crypto.createHash('sha256').update(tlsPeer.raw).digest('hex');
  }
  return `profile:${request.query.userId}:client:${clientFingerprint}`;
};

With this approach, each unique client certificate produces a distinct cache entry, effectively mitigating cross-client cache poisoning. Combine this with explicit validation of request headers and query parameters that influence the response, as flagged by Property Authorization checks. In production, pair this pattern with MiddleBrick’s CLI tool (middlebrick scan <url>) to continuously verify that your cache keys remain context-aware and that no BOLA/IDOR conditions persist across TLS sessions.
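The isolation property can be seen in miniature in the following self-contained sketch, where a plain Map stands in for a real cache backend such as catbox (getProfile and loadFn are illustrative names, not a Hapi API):

```javascript
const cache = new Map();

// Illustrative per-client cache lookup: the key embeds both the resource id
// and the client certificate fingerprint, so two clients with distinct
// certificates can never share a cached entry.
function getProfile(userId, fingerprint, loadFn) {
  const key = `profile:${userId}:client:${fingerprint}`;
  if (!cache.has(key)) {
    cache.set(key, loadFn(userId));
  }
  return cache.get(key);
}
```

Two requests for the same userId under different fingerprints populate separate entries, so a poisoned or stale entry for one client is never served to another.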

For teams using the Web Dashboard or GitHub Action, configure alerts to notify when high-risk findings appear related to cache behavior and authentication context. The MCP Server allows you to run scans directly from your IDE while developing Hapi routes, ensuring cache-aware security is considered early. Pricing tiers such as Pro support continuous monitoring to detect regressions in cache isolation over time.

Frequently Asked Questions

How does Mutual TLS affect cache key design in Hapi?
Mutual TLS binds the client certificate to the session. If cache keys ignore certificate context, responses cached for one client may be reused for another, causing cache poisoning. Include a stable fingerprint of the client certificate in the cache key to isolate responses per client.
Can MiddleBrick detect cache poisoning risks in Hapi with Mutual TLS?
Yes. MiddleBrick’s BOLA/IDOR and Authentication checks identify cases where authorization is not incorporated into caching logic, and its LLM/AI Security and Data Exposure checks highlight risks where cached sensitive data could be exposed across contexts.