Cache Poisoning with Bearer Tokens
How Cache Poisoning Manifests in Bearer Tokens
Cache poisoning in Bearer Token implementations occurs when an attacker manipulates cached authentication responses to gain unauthorized access or escalate privileges. This vulnerability typically manifests through several specific attack vectors unique to token-based authentication systems.
The most common Bearer Token cache poisoning pattern involves response splitting attacks where an attacker injects CRLF sequences into token payloads. When the server caches these responses without proper sanitization, subsequent requests receive poisoned cache entries that bypass authentication checks. For example:
Authorization: Bearer eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9.eyJ1c2VybmFtZSI6IiAgIiAiaXAiOiIxMjcuMC4wLjEiLCJyb2xlIjpbInVzZXIiXX0.signature

In this payload, the injected whitespace creates header injection opportunities when cached responses are served to other users. The cache then serves the poisoned response with a modified user context, effectively granting the attacker the victim's privileges.
Another manifestation involves token replay through shared cache storage. When Bearer Tokens are cached without proper isolation, an attacker who gains access to the cache storage can extract valid tokens and replay them. This is particularly dangerous in distributed caching systems where cache entries are shared across multiple application instances:
// Vulnerable caching pattern - no token isolation
cache.set('user:' + userId, tokenData, ttl);
// Attacker extracts cached tokens
const cachedTokens = cache.get('user:' + victimId);

Time-based cache poisoning represents a third attack vector. Attackers manipulate system clocks or cache TTL values to extend token validity periods beyond intended limits. This often occurs when servers cache token validation results without considering clock skew or when using weak timestamp implementations in token payloads.
Cross-tenant cache poisoning is especially problematic in multi-tenant Bearer Token systems. When cache keys are derived from predictable tenant identifiers without proper namespace isolation, an attacker can craft tokens that collide with legitimate cache entries from other tenants:
// Vulnerable - predictable cache keys
const cacheKey = 'tenant:' + tenantId + ':token:' + tokenId;
// Attacker guesses tenant IDs and creates colliding tokens

The final common pattern involves cache poisoning through malformed token claims. Attackers craft tokens with claims that, when cached, overwrite critical security metadata. This is particularly effective against systems that cache authorization decisions based on token claims:
// Attacker token with manipulated claims
const maliciousToken = jwt.sign({
  sub: 'victim-user',
  role: 'admin', // Escalated privilege
  exp: futureTimestamp,
  iat: pastTimestamp
}, secret);

Bearer Token-Specific Detection
Detecting cache poisoning in Bearer Token systems requires a multi-layered approach that examines both runtime behavior and cached content. The most effective detection strategy combines automated scanning with manual validation techniques.
Automated detection begins with analyzing cache key generation patterns. Vulnerable implementations often use predictable or insufficient entropy in cache keys. A secure implementation derives cache keys from the token with a secret-keyed hash, keeping keys deterministic for lookups yet unpredictable to attackers:

// Secure cache key generation: an HMAC keyed with a server-side secret.
// (A per-call random salt would make the entry impossible to look up again.)
function generateCacheKey(token, serverSecret) {
  const tokenHmac = crypto.createHmac('sha256', serverSecret)
    .update(token)
    .digest('hex');
  return `token-cache:${tokenHmac}`;
}

Response header analysis is critical for detecting cache poisoning vulnerabilities. Look for missing or weak cache control headers in token responses:
// Vulnerable - missing cache controls
const vulnerableResponse = {
  headers: {
    'Content-Type': 'application/json'
  },
  body: { token: 'eyJ...' }
};

// Secure - explicit cache controls
const secureResponse = {
  headers: {
    'Content-Type': 'application/json',
    'Cache-Control': 'no-store, no-cache, must-revalidate',
    'Pragma': 'no-cache',
    'Expires': '0'
  },
  body: { token: 'eyJ...' }
};

Runtime monitoring should track cache hit rates and token validation patterns. Unusual cache hit patterns often indicate cache poisoning attempts:
// Monitoring cache behavior: track per-token hit counts and flag outliers
const hitCounts = new Map();
const HIT_THRESHOLD = 100; // tune to your traffic profile

function monitorCacheAccess(cache, token) {
  const result = cache.get(token);
  if (result !== undefined) {
    const hits = (hitCounts.get(token) || 0) + 1;
    hitCounts.set(token, hits);
    if (hits > HIT_THRESHOLD) {
      // Abnormally high hit rate for one token - potential poisoning or replay
      logger.warn('Suspicious cache hit pattern for token', { tokenId: extractTokenId(token) });
    }
  }
  return result;
}

Token structure analysis helps identify malformed tokens that could be used for cache poisoning. Validate token claims and structure before caching:
function validateTokenStructure(token) {
  try {
    const decoded = jwt.decode(token, { complete: true });
    if (!decoded || !decoded.header || !decoded.payload) {
      throw new Error('Invalid token structure');
    }
    // Check for suspicious claims (iat is in seconds, so compare in seconds)
    if (decoded.payload.iat > Math.floor(Date.now() / 1000) + 60) {
      throw new Error('Future issued-at timestamp');
    }
    return true;
  } catch (error) {
    logger.error('Token validation failed', { error: error.message });
    return false;
  }
}

middleBrick's scanning capabilities specifically target these Bearer Token cache poisoning patterns. The scanner analyzes API endpoints for weak cache controls, predictable cache key generation, and improper token validation before caching. It tests for response splitting vulnerabilities by submitting payloads with CRLF sequences and examines cache isolation mechanisms in multi-tenant scenarios.
Runtime testing with middleBrick includes active probing for cache poisoning by submitting crafted tokens and monitoring how the system handles them. The scanner checks whether tokens with manipulated claims are properly rejected or if they can influence cached authorization decisions.
Bearer Token-Specific Remediation
Remediating cache poisoning in Bearer Token systems requires implementing defense-in-depth strategies that address both token handling and cache management. The most effective approach combines secure token design, proper cache configuration, and runtime validation.
Start with secure token design principles. Use tokens with built-in cache poisoning resistance by including nonce values and request identifiers:
function generateSecureToken(payload, secret) {
  const nonce = crypto.randomBytes(16).toString('hex');
  const requestId = uuidv4();
  const token = jwt.sign({
    ...payload,
    nonce,
    requestId,
    iat: Math.floor(Date.now() / 1000),
    exp: Math.floor(Date.now() / 1000) + 3600
  }, secret, { algorithm: 'HS256' });
  return { token, nonce, requestId };
}

// Validate token nonce on each request
function validateToken(token, secret) {
  const decoded = jwt.verify(token, secret);
  // Check nonce against recent usage
  if (isNonceReused(decoded.nonce)) {
    throw new Error('Nonce reuse detected - potential replay attack');
  }
  return decoded;
}

Implement strict cache control headers to prevent unauthorized caching of token responses. This is the first line of defense against cache poisoning:
// Express middleware for secure token responses
function secureTokenResponse(req, res, next) {
  res.setHeader('Cache-Control', 'no-store, no-cache, must-revalidate, private');
  res.setHeader('Pragma', 'no-cache');
  res.setHeader('Expires', '0');
  res.setHeader('Surrogate-Control', 'no-store');
  // Add security headers for token responses
  res.setHeader('X-Content-Type-Options', 'nosniff');
  res.setHeader('X-Frame-Options', 'DENY');
  next();
}

// Apply to token endpoints
app.post('/api/auth/token', secureTokenResponse, (req, res) => {
  const token = generateSecureToken(req.body, process.env.JWT_SECRET);
  res.json({ token: token.token });
});

Cache isolation is critical for preventing cross-tenant and cross-user cache poisoning. Use token-specific cache keys with strong entropy:
const cache = new Redis({ /* configuration */ });

class SecureTokenCache {
  constructor(redisClient) {
    this.client = redisClient;
  }

  async setToken(token, data, ttl) {
    const tokenHash = crypto.createHash('sha256')
      .update(token)
      .digest('hex');
    // The key must be deterministic (no timestamp component) so that
    // getToken can recompute it; the write time lives in the metadata
    const cacheKey = `secure-token:${tokenHash}`;
    // Store with TTL and additional metadata
    const cacheData = {
      data,
      timestamp: Date.now(),
      ttl,
      validationHash: this.generateValidationHash(data)
    };
    await this.client.setex(cacheKey, ttl, JSON.stringify(cacheData));
    return cacheKey;
  }

  async getToken(token) {
    const tokenHash = crypto.createHash('sha256')
      .update(token)
      .digest('hex');
    const cacheKey = `secure-token:${tokenHash}`;
    const cached = await this.client.get(cacheKey);
    if (!cached) return null;
    const cacheData = JSON.parse(cached);
    // Validate cache integrity
    if (cacheData.validationHash !== this.generateValidationHash(cacheData.data)) {
      await this.client.del(cacheKey);
      return null;
    }
    return cacheData.data;
  }

  generateValidationHash(data) {
    return crypto.createHash('sha256')
      .update(JSON.stringify(data))
      .digest('hex');
  }
}

Runtime validation should verify token integrity and cache consistency on every request. Implement token replay detection and cache poisoning monitoring:
const tokenCache = new SecureTokenCache(cache);

// Must be async because it awaits cache lookups
async function tokenValidationMiddleware(req, res, next) {
  const authHeader = req.headers.authorization;
  if (!authHeader || !authHeader.startsWith('Bearer ')) {
    return res.status(401).json({ error: 'Missing bearer token' });
  }
  const token = authHeader.substring(7);
  // Check for token replay
  if (isTokenReplayed(token)) {
    logger.warn('Token replay detected', { tokenId: extractTokenId(token) });
    return res.status(401).json({ error: 'Invalid token' });
  }
  try {
    // Validate token structure and claims before consulting the cache
    if (!validateTokenStructure(token)) {
      return res.status(401).json({ error: 'Invalid token' });
    }
    // Check cache for an existing validation result
    const cachedValidation = await tokenCache.getToken(token);
    if (cachedValidation) {
      // Verify the cached result has not expired
      if (cachedValidation.exp < Date.now() / 1000) {
        return res.status(401).json({ error: 'Token expired' });
      }
      req.user = cachedValidation;
      return next();
    }
    // Perform fresh validation if not cached
    const validatedData = await validateTokenWithAuthService(token);
    const ttl = Math.floor(validatedData.exp - Date.now() / 1000);
    await tokenCache.setToken(token, validatedData, ttl);
    req.user = validatedData;
    next();
  } catch (error) {
    logger.error('Token validation failed', { error: error.message, tokenId: extractTokenId(token) });
    return res.status(401).json({ error: 'Invalid token' });
  }
}

Continuous monitoring with middleBrick helps maintain these defenses over time. The scanner's continuous monitoring capabilities detect when cache poisoning defenses degrade or when new vulnerabilities emerge in your Bearer Token implementation.