
Cache Poisoning in Hanami with Mutual TLS

Cache Poisoning in Hanami with Mutual TLS — how this specific combination creates or exposes the vulnerability

Cache poisoning in Hanami occurs when an attacker manipulates cache key generation so that a malicious response is stored and subsequently served to other users. When mutual TLS (mTLS) is used, the assumption is that only authenticated, authorized clients can reach the application. However, mTLS secures the transport and client identity, not application logic such as how cache keys are built or how varied responses are derived from request properties.

In Hanami, caching is often implemented at the gateway, reverse proxy, or within application-level stores (e.g., Redis). If the cache key includes elements that mTLS does not protect—such as specific headers, query parameters, or user identifiers extracted after TLS termination—the system may cache a user-specific or role-specific response under a key that other authenticated clients inadvertently reuse. For example, if the response body varies on an X-User-Role header that an attacker can influence while the cache key omits that header, an admin response can be stored under the same key other clients hit, exposing data across mTLS-authenticated sessions.
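A minimal sketch of that unkeyed-input pattern, assuming an ActiveSupport::Cache-style store; DashboardRepository is a hypothetical stand-in:

# UNSAFE: the response varies on a client-controlled header, but the cache
# key omits that dimension. An mTLS-authenticated attacker sends
# X-User-Role: admin first; the admin variant is stored under the shared
# key and then served to every later caller.
def render_dashboard(env, cache)
  role = env["HTTP_X_USER_ROLE"] || "public"
  cache.fetch("dashboard:v1", expires_in: 600) do
    DashboardRepository.new.render_for(role)
  end
end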

Even with mTLS, if Hanami services cache responses based on incomplete request dimensions (omitting certificate-derived attributes or including attacker-controlled inputs), the poisoned cache entry can be served to other valid clients. mTLS ensures the client presents a valid certificate, but it does not prevent a valid client from requesting a path that yields a response that should not be shared. The scanner’s BOLA/IDOR and Input Validation checks can surface these logic flaws by probing how caching behavior differs across authenticated identities and manipulated parameters.

Real-world attack patterns mirror OWASP Top 10 A04:2021 (Insecure Design). CVE-2023-44487 (HTTP/2 Rapid Reset) is sometimes cited alongside them, though it is a protocol-level resource-exhaustion flaw rather than a cache poisoning issue. In Hanami, an unvalidated query parameter like ?lang=en could cause a cached response intended for one language to be served to users of another, despite mTLS client authentication. Data Exposure checks in middleBrick can surface such gaps by comparing cache-varied responses across authenticated contexts.
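A hedged sketch of the fix for that ?lang case: validate the parameter against an allowlist and make it an explicit cache dimension, so one language's response can never be stored under another's key (PageRepository and the cache store are illustrative):

SUPPORTED_LANGS = %w[en de fr].freeze

def cached_page(params, cache)
  # Unknown values collapse to a default instead of minting
  # attacker-chosen cache key variants.
  lang = SUPPORTED_LANGS.include?(params[:lang]) ? params[:lang] : "en"
  cache.fetch("page:v1:#{lang}", expires_in: 600) do
    PageRepository.new.render(lang: lang)
  end
end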

Because mTLS terminates before application logic, developers must ensure cache keys incorporate certificate-bound identifiers or reject inputs that should not affect cache differentiation. middleBrick’s OpenAPI/Swagger analysis with full $ref resolution can help identify which request dimensions are used for caching, while its LLM/AI Security checks ensure prompts and outputs do not inadvertently leak cache-sensitive data.

Mutual TLS-Specific Remediation in Hanami — concrete code fixes

To mitigate cache poisoning when using mutual TLS in Hanami, ensure cache keys include mTLS-derived attributes and exclude attacker-influenced inputs. Hanami applications typically use a reverse proxy or gateway to handle mTLS, passing verified client details via headers. Use these headers explicitly in cache key construction and validate them server-side.

Example Hanami controller with safe cache usage:

require "hanami/controller"
require "openssl"

class ArticlesController < Hanami::Controller
  def show
    # Assume mTLS terminates at the gateway; the client certificate fingerprint
    # is passed in X-Client-Cert-Fingerprint and validated by the gateway.
    fingerprint = request.env["HTTP_X_CLIENT_CERT_FINGERPRINT"]
    raise "Unauthorized" unless fingerprint&.match?(Digest::SHA256.hexdigest_pattern)

    # Build a cache key that includes the fingerprint and a sanitized article_id.
    article_id = params.fetch(:id) { |e| raise "missing id" }
    cache_key = ["v1", "article", article_id, fingerprint].join(":")

    cache.fetch(cache_key, expires_in: 3600) do
      # Fetch from repository; repository must not expose user-specific fields to public cache.
      ArticleRepository.new.find(article_id).to_h
    end
  end
end
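Including the fingerprint in the key makes every cache entry client-specific, trading hit rate for isolation. For resources that are genuinely identical across clients, the fingerprint can be dropped from the key, but only after confirming that no client-controlled input influences the cached response.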

Example gateway configuration (e.g., nginx) to enforce mTLS and pass verified headers:

server {
  listen 443 ssl;
  ssl_certificate /etc/ssl/certs/server.crt;
  ssl_certificate_key /etc/ssl/private/server.key;

  # Require client certificates
  ssl_client_certificate /etc/ssl/certs/ca.crt;
  ssl_verify_client on;

  location /api {
    # Pass the verified certificate fingerprint (SHA-1) to the app.
    # proxy_set_header overwrites any value the client supplies, so this
    # header cannot be spoofed by the caller.
    proxy_set_header X-Client-Cert-Fingerprint $ssl_client_fingerprint;
    proxy_pass http://hanami_app;
  }
}

In the Hanami app, ensure that inputs which should not affect cache differentiation are excluded; in particular, avoid using query parameters or non-validated headers directly in cache keys, as sketched below. middleBrick's CLI tool can scan an endpoint from the terminal with middlebrick scan <url>, producing a per-category breakdown that highlights BOLA/IDOR and Input Validation findings specific to cache logic. The Pro plan adds continuous monitoring to detect regressions in cache behavior across deployments, while the GitHub Action can fail builds if a new endpoint introduces risky cache key composition.
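One way to make that exclusion explicit is a small key builder that accepts only allowlisted dimensions and fails loudly otherwise; this is a sketch, not part of Hanami itself:

module CacheKey
  ALLOWED_DIMENSIONS = %i[resource id fingerprint lang].freeze

  # Builds "v1:article:42:<fingerprint>"-style keys from an explicit
  # allowlist, so a new query parameter or header can never slip into
  # the key by accident.
  def self.build(version:, **dimensions)
    unknown = dimensions.keys - ALLOWED_DIMENSIONS
    raise ArgumentError, "unknown cache dimensions: #{unknown}" if unknown.any?
    ([version] + dimensions.values.map(&:to_s)).join(":")
  end
end

# e.g. CacheKey.build(version: "v1", resource: "article", id: "42", fingerprint: fp)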

Remediation also involves validating that cached responses do not contain user-specific data when served under shared keys. Use the dashboard to track scores over time and ensure that changes to cache configuration do not degrade security. For compliance mappings, findings can be aligned with OWASP API Top 10 and PCI-DSS requirements relevant to caching and data segregation.
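To enforce that check programmatically, per-user fields can be scrubbed before a value is written under a shared key; the field list here is illustrative:

# Fields that must never appear in a response cached under a shared key.
USER_SPECIFIC_FIELDS = %i[email api_token last_login_ip].freeze

def cache_shared(cache, key, expires_in: 3600)
  cache.fetch(key, expires_in: expires_in) do
    # Drop per-user attributes before the entry is stored, so a key that
    # is shared across clients cannot leak one user's data to another.
    yield.reject { |field, _| USER_SPECIFIC_FIELDS.include?(field) }
  end
end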

Frequently Asked Questions

Does mutual TLS prevent cache poisoning in Hanami?
No. Mutual TLS authenticates clients and encrypts traffic, but it does not protect against application-level cache key manipulation. If cache keys include attacker-influenced inputs or omit certificate-bound identifiers, poisoning can still occur.
How can I verify my cache keys are safe across mTLS clients?
Use middleBrick’s scanner to analyze endpoint behavior across authenticated contexts. The CLI (middlebrick scan <url>) and GitHub Action can highlight BOLA/IDOR and Input Validation findings; the Dashboard tracks scoring over time to catch regressions in cache logic.