Cache Poisoning in LoopBack with Mutual TLS
Cache Poisoning in LoopBack with Mutual TLS — how this specific combination creates or exposes the vulnerability
Cache poisoning occurs when an attacker manipulates cached content so that subsequent requests receive malicious or incorrect data. In a LoopBack application protected by mutual TLS (mTLS), mTLS ensures that client and server authenticate each other with certificates. This transport-layer assurance does not inherently prevent cache poisoning at the application or proxy layer, because the TLS session is established before any HTTP interpretation occurs.
When a LoopBack API is fronted by a caching layer (for example, an API gateway, CDN, or reverse proxy) that caches responses based on request attributes, mTLS does not automatically prevent the cache from using attacker-influenced inputs as cache keys. If the cache key includes elements derived from the request that mTLS does not constrain—such as headers, query parameters, or the request path—an attacker who can make authenticated requests with a valid client certificate may be able to poison entries that are later served to other users.
Consider a scenario where a LoopBack endpoint accepts a custom header or query parameter that influences the cache key but is not validated. Even with mTLS confirming that the client holds a trusted certificate, the caching layer treats distinct keys as separate cache entries. An attacker could obtain a valid certificate, authenticate, and submit crafted requests that cause the cache to store malicious content under a key value they control. Later requests for the same cache key receive the poisoned response, which may disclose sensitive data or alter application behavior.
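The failure mode above can be sketched with a toy in-memory cache whose key incorporates an attacker-controlled header. The header name x-variant, the cache shape, and the request objects are illustrative assumptions, not LoopBack APIs:

```javascript
// Toy in-memory cache keyed partly on a client-supplied header.
// The 'x-variant' header is a hypothetical attacker-controllable
// input that leaks into the cache key.
const cache = new Map();

function cacheKey(req) {
  // BAD: includes a header the client fully controls.
  return `${req.path}|${req.headers['x-variant'] || ''}`;
}

function handle(req, compute) {
  const key = cacheKey(req);
  if (cache.has(key)) return cache.get(key); // poisoned entry served here
  const response = compute(req);
  cache.set(key, response);
  return response;
}

// Attacker (authenticated via mTLS) seeds a poisoned entry...
const attackerReq = { path: '/profile', headers: { 'x-variant': 'v1' } };
handle(attackerReq, () => 'malicious payload');

// ...and a later request that maps to the same key receives it.
const victimReq = { path: '/profile', headers: { 'x-variant': 'v1' } };
const served = handle(victimReq, () => 'legitimate content');
console.log(served); // 'malicious payload'
```

Note that mTLS is satisfied throughout: the attacker is a fully authenticated client, yet the cache happily serves their content to others.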
Additionally, if the cache differentiates by the presence of an Authorization header but not by its value, authenticated requests from different clients may inadvertently share cache entries when they should not. This is both a confidentiality and an integrity issue: one client's cached response may be reused for another, bypassing expected isolation. Furthermore, if the server-side mTLS configuration does not enforce strict certificate validation (for example, failing to verify client certificate revocation status or chain completeness), an attacker might authenticate with a compromised certificate and then carry out cache poisoning from within the trusted perimeter.
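The presence-only keying mistake can be shown in a few lines: two differently authenticated clients collapse onto the same cache entry because the key records only whether Authorization is set, not its value (the key function and requests are illustrative sketches):

```javascript
// BAD: the key varies only on Authorization *presence*, so every
// authenticated client shares one cache entry per path.
function presenceOnlyKey(req) {
  return `${req.path}|auth:${req.headers.authorization ? 'yes' : 'no'}`;
}

const alice = { path: '/me', headers: { authorization: 'Bearer alice-token' } };
const bob = { path: '/me', headers: { authorization: 'Bearer bob-token' } };

const shared = presenceOnlyKey(alice) === presenceOnlyKey(bob);
console.log(shared); // true: Bob can be served Alice's cached response
```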
In summary, cache poisoning in a LoopBack service with mutual TLS arises not from TLS itself but from mismatches between the cache key definition and the security boundaries that mTLS enforces. Remediation requires explicit cache key normalization, strict header and parameter validation, and an mTLS configuration that enforces client identity and revocation checks, so that authenticated contexts remain isolated and trustworthy.
Mutual TLS-Specific Remediation in LoopBack — concrete code fixes
To mitigate cache poisoning in LoopBack while using mutual TLS, focus on precise cache key construction, strict header validation, and robust mTLS configuration. Ensure that cache keys exclude attacker-controllable inputs and instead incorporate verified identity information from the mTLS handshake, rather than request-derived values that can be manipulated.
First, configure your caching layer to use a normalized key that relies on the verified client certificate subject or a stable identifier extracted during the TLS handshake, avoiding inclusion of query strings or mutable headers. If you use a reverse proxy or gateway in front of LoopBack, set cache rules that omit volatile or user-controlled headers from the key.
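As one illustration of the gateway-side fix, an Nginx reverse proxy in front of the LoopBack server can pin the cache key to the verified client certificate fingerprint and the normalized URI, deliberately omitting query strings and custom headers. The zone name, file paths, and upstream port below are assumptions for the sketch, not values taken from a real deployment:

```nginx
# Sketch: cache key built from verified mTLS identity, not from
# attacker-controlled request attributes.
proxy_cache_path /var/cache/nginx keys_zone=api_cache:10m;

server {
    listen 443 ssl;
    ssl_certificate         /etc/nginx/server-cert.pem;
    ssl_certificate_key     /etc/nginx/server-key.pem;
    # CA used to validate client certificates; reject unverified clients.
    ssl_client_certificate  /etc/nginx/ca-cert.pem;
    ssl_verify_client       on;

    location /api/ {
        proxy_cache api_cache;
        # Key on scheme, host, normalized URI, and the client certificate
        # fingerprint; $args and custom headers are intentionally omitted.
        proxy_cache_key "$scheme$host$uri$ssl_client_fingerprint";
        proxy_pass https://127.0.0.1:8443;
    }
}
```

Because $ssl_client_fingerprint is derived from the certificate the proxy itself verified, a client cannot steer another client's requests onto its cache entry.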
Second, harden your LoopBack application to validate and sanitize any inputs that might indirectly affect caching behavior. Apply strict content negotiation, and avoid using request parameters to determine cacheability when mTLS already provides identity assurance.
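A hedged sketch of that hardening as Express-style middleware: custom headers that could skew caching decisions are checked against an allow-list before the request proceeds. The allow-list contents and header conventions are illustrative assumptions:

```javascript
// Reject requests carrying custom headers that could influence caching
// downstream. The allow-list below is illustrative, not exhaustive.
const ALLOWED_CUSTOM_HEADERS = new Set(['x-api-version']); // assumption

function validateCacheInputs(req) {
  // Flag any 'x-' prefixed header not explicitly allow-listed.
  const suspicious = Object.keys(req.headers).filter(
    (h) => h.startsWith('x-') && !ALLOWED_CUSTOM_HEADERS.has(h)
  );
  return suspicious.length === 0;
}

// Usage as middleware: fail closed on unexpected cache-affecting headers.
function cacheInputGuard(req, res, next) {
  if (!validateCacheInputs(req)) {
    return res.status(400).send('Unexpected cache-affecting header');
  }
  next();
}

const ok = validateCacheInputs({ headers: { 'x-api-version': '1' } });
const bad = validateCacheInputs({ headers: { 'x-variant': 'evil' } });
console.log(ok, bad); // true false
```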
Below are example configurations for a LoopBack application with mTLS enabled. The first shows a typical HTTPS server setup (LoopBack 3 style, where require('loopback') returns an app factory) that requires client certificates:
const fs = require('fs');
const https = require('https');
const loopback = require('loopback');

const app = loopback();

const serverOptions = {
  key: fs.readFileSync('server-key.pem'),
  cert: fs.readFileSync('server-cert.pem'),
  // CA used to validate client certificates
  ca: fs.readFileSync('ca-cert.pem'),
  // Request a certificate from every client...
  requestCert: true,
  // ...and reject connections whose certificate fails validation
  rejectUnauthorized: true,
};

const server = https.createServer(serverOptions, app);
server.listen(8443, () => {
  console.log('LoopBack mTLS server listening on port 8443');
});
In this setup, rejectUnauthorized: true ensures that clients must present valid, trusted certificates. To further tighten identity-based cache controls, you can extract the verified client certificate details and use them in cache logic (for example, via an Express middleware in LoopBack):
app.use((req, res, next) => {
  // On a TLS socket, getPeerCertificate() returns the verified client
  // certificate ({} when none was presented, hence the subject check).
  const cert = req.socket.getPeerCertificate && req.socket.getPeerCertificate();
  if (cert && cert.subject) {
    // Prefer the certificate fingerprint: it is unique per certificate,
    // whereas the subject CN is only a human-readable fallback.
    req.verifiedIdentity = cert.fingerprint || cert.subject.CN;
  } else {
    req.verifiedIdentity = 'unknown';
  }
  next();
});
With this middleware in place, your caching logic can key entries on req.verifiedIdentity rather than on headers or query parameters that an attacker might influence. Combine this with cache-control directives that avoid storing sensitive or user-specific responses under shared keys, and you reduce the window for cache poisoning even when mTLS is in use.
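To illustrate, here is a minimal per-identity cache keyed on the req.verifiedIdentity value set by the middleware above plus the normalized path. The in-memory Map is a sketch for demonstration, not a production cache store:

```javascript
// Sketch: key cached responses on the mTLS-verified identity plus the
// normalized path, so one client's entry can never serve another client.
const identityCache = new Map();

function identityCacheKey(req) {
  // req.verifiedIdentity comes from the mTLS middleware; the path is
  // normalized, and query strings are deliberately excluded from the key.
  return `${req.verifiedIdentity}|${req.path}`;
}

function cachedHandler(req, compute) {
  const key = identityCacheKey(req);
  if (!identityCache.has(key)) {
    identityCache.set(key, compute(req));
  }
  return identityCache.get(key);
}

// Two clients with different verified identities never share an entry.
const a = cachedHandler({ verifiedIdentity: 'client-a', path: '/me' }, () => 'A data');
const b = cachedHandler({ verifiedIdentity: 'client-b', path: '/me' }, () => 'B data');
console.log(a, b); // 'A data' 'B data'
```

Because the key is derived from the certificate the server itself verified, no request header or query parameter an attacker controls can redirect their content into another client's cache slot.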