Severity: Medium

Cache Poisoning in Hapi

How Cache Poisoning Manifests in Hapi

Cache poisoning in Hapi applications occurs when an attacker manipulates HTTP headers to store malicious or incorrect responses in shared caches, affecting all subsequent users who request the same resource. This vulnerability is particularly dangerous in Hapi because of its flexible routing system and built-in support for various caching mechanisms.

The most common attack vector involves manipulating the Accept header to trigger different cache entries for what should be semantically identical requests. For example, an endpoint that serves JSON by default might also support XML responses. An attacker can craft requests with specific Accept headers to create cache entries that serve incorrect content types to legitimate users.

// Vulnerable Hapi route - no cache key validation
const routes = [
  {
    method: 'GET',
    path: '/api/user/{id}',
    handler: async (request, h) => {
      const userId = request.params.id;
      
      // No validation of Accept header
      const acceptHeader = request.headers.accept || 'application/json';
      
      // Cache key includes raw Accept header without sanitization
      const cacheKey = `user-${userId}-${acceptHeader}`;
      
      // Retrieve from cache or fetch from database
      let cachedResponse = await cache.get(cacheKey);
      if (!cachedResponse) {
        const user = await db.getUser(userId);
        cachedResponse = { 
          data: user, 
          contentType: acceptHeader.includes('xml') ? 'application/xml' : 'application/json'
        };
        await cache.set(cacheKey, cachedResponse, 300); // 5 minute cache
      }
      
      return h.response(cachedResponse.data)
        .type(cachedResponse.contentType)
        .header('Cache-Control', 'public, max-age=300');
    }
  }
];

This pattern is dangerous because Hapi's default behavior doesn't validate or normalize header values used in cache keys. An attacker can exploit this by sending requests with manipulated headers like:

GET /api/user/123 HTTP/1.1
Host: example.com
Accept: application/xml; version="1.0"; os="windows"; lang="en"; cache-poison=true

The cache will store a separate entry for each unique header combination, and subsequent requests without the malicious header might still retrieve the poisoned cache entry if the cache key generation logic is flawed.
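The flaw can be illustrated in isolation. The sketch below reproduces the cache-key scheme from the vulnerable route above as a standalone function, showing that two semantically identical requests produce separate cache entries once an attacker pads the Accept header:

```javascript
// Sketch: the cache-key scheme from the vulnerable route above,
// reproduced standalone to show how header variants fragment the cache
const makeKey = (userId, accept) => `user-${userId}-${accept}`;

const legitimate = makeKey('123', 'application/json');
const poisoned = makeKey('123', 'application/json; cache-poison=true');

// Both requests mean the same thing, but each gets its own cache entry
console.log(legitimate === poisoned); // false
```

Every distinct header variant the attacker invents becomes another entry, fragmenting the cache and opening the door to serving stale or incorrect content.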

Another Hapi-specific manifestation involves Vary header handling. When endpoints serve different content based on headers like Authorization, Accept-Language, or custom headers, Hapi applications often set the Vary header manually (or via a plugin) without validating the header values it exposes:

// Vulnerable - improper Vary header handling
const routes = [
  {
    method: 'GET',
    path: '/api/data',
    handler: async (request, h) => {
      // Content varies by custom header
      const customHeader = request.headers['x-custom-header'] || 'default';
      
      // Set Vary header without validating the user-controlled value
      return h.response(await generateData(customHeader))
        .header('Vary', 'X-Custom-Header')
        .header('Cache-Control', 'public, max-age=600');
    }
  }
];

Attackers can exploit this by sending requests with crafted X-Custom-Header values that create cache entries serving incorrect data to legitimate users. The vulnerability is amplified when these endpoints are behind reverse proxies or CDN services that respect the Vary header.
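The priming step is easiest to see in a simplified in-memory model of the proxy or CDN cache (this sketch is not Hapi itself): once the attacker's crafted header value is cached, any later client sending the same value receives the attacker-primed entry.

```javascript
// Sketch: a shared cache keyed on a user-controlled Vary header
// (simplified in-memory model of proxy/CDN behavior, not Hapi itself)
const sharedCache = new Map();

const handle = (xCustomHeader, generate) => {
  const key = `/api/data|${xCustomHeader}`; // mirrors Vary: X-Custom-Header
  if (!sharedCache.has(key)) sharedCache.set(key, generate(xCustomHeader));
  return sharedCache.get(key);
};

// Attacker primes the cache with a crafted header value
handle('crafted-value', (v) => `stale-data-for-${v}`);

// A later client sending the same header value gets the primed entry,
// even though the origin would have generated fresh data
console.log(handle('crafted-value', () => 'fresh-data')); // stale-data-for-crafted-value
```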

Hapi-Specific Detection

Detecting cache poisoning in Hapi applications requires examining both the runtime behavior and the configuration. The most effective approach combines automated scanning with manual code review of cache-related patterns.

Using middleBrick's API security scanner, you can identify cache poisoning vulnerabilities by scanning your Hapi endpoints. The scanner tests for header manipulation patterns and verifies proper cache key generation:

# Scan Hapi API endpoints with middleBrick
middlebrick scan https://api.yourservice.com --output json

# Example output showing cache poisoning findings
{
  "risk_score": 65,
  "category_breakdown": {
    "input_validation": 40,
    "data_exposure": 25
  },
  "findings": [
    {
      "title": "Cache Poisoning via Accept Header Manipulation",
      "severity": "medium",
      "location": "/api/user/{id}",
      "remediation": "Validate and normalize Accept header values before using in cache keys",
      "cve_reference": "N/A"
    }
  ]
}

Manual detection involves reviewing your Hapi route handlers for specific anti-patterns. Look for these red flags in your codebase:

// Code review heuristics for Hapi cache poisoning (grep-style patterns)
const cachePoisoningRedFlags = [
  // 1. Raw request headers flowing directly into cache keys
  /cacheKey.*request\.headers/i,

  // 2. Cache writes driven by unvalidated header values
  /cache\.set\(.*request\.headers/i,

  // 3. Dynamically set Vary headers (review what their values are built from)
  /\.header\(\s*['"]vary['"]/i,

  // 4. Content negotiation that reads the Accept header directly
  /request\.headers\.accept/i
];
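These heuristics can be applied mechanically. A minimal scanner (Node standard library only; the patterns are repeated here so the example runs standalone) flags suspect lines in route source:

```javascript
// Sketch: flag suspect lines in route source using grep-style red-flag patterns
// (patterns repeated here for a self-contained example)
const patterns = [
  /cacheKey.*request\.headers/i,  // raw headers flowing into cache keys
  /\.header\(\s*['"]vary['"]/i,   // dynamically set Vary headers worth reviewing
];

const flagSource = (source) =>
  source.split('\n').filter((line) => patterns.some((p) => p.test(line)));

const sample = 'const cacheKey = `user-${id}-${request.headers.accept}`;';
console.log(flagSource(sample).length); // 1
```

In a real codebase you would read route files with `fs.readFileSync` and run `flagSource` over each one, then review the matches by hand; regexes only surface candidates, they do not prove a vulnerability.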

Hapi's plugin ecosystem can help detect these issues. The hapi-auth-jwt2 and hapi-rate-limiter plugins often interact with caching mechanisms, so review their configurations for cache poisoning vulnerabilities:

// Vulnerable plugin configuration
const server = Hapi.server({
  port: 3000,
  host: 'localhost'
});

server.register([
  {
    plugin: require('hapi-rate-limiter'),
    options: {
      // Cache key includes raw IP without validation
      cacheKeyGenerator: (request) => request.info.remoteAddress,
      // No protection against header manipulation
      redis: redisClient
    }
  }
]);

Network-level detection is also crucial. Monitor your Hapi application's cache hit ratios and investigate anomalies where specific header combinations receive disproportionate cache hits. Tools like hapi-pino for logging can help track suspicious cache access patterns.
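A lightweight way to surface those anomalies is to track per-route hit ratios. The sketch below is a hypothetical helper; a real deployment would feed these numbers into hapi-pino or a metrics backend rather than keeping them in memory:

```javascript
// Sketch: per-route cache hit/miss counters for spotting anomalies
// (hypothetical helper; feed the ratios into your logging/metrics stack)
class CacheStats {
  constructor() { this.routes = new Map(); }

  record(route, hit) {
    const s = this.routes.get(route) || { hits: 0, misses: 0 };
    hit ? (s.hits += 1) : (s.misses += 1);
    this.routes.set(route, s);
  }

  hitRatio(route) {
    const s = this.routes.get(route);
    return s ? s.hits / (s.hits + s.misses) : 0;
  }
}

const stats = new CacheStats();
stats.record('/api/user/{id}', true);
stats.record('/api/user/{id}', false);
console.log(stats.hitRatio('/api/user/{id}')); // 0.5
```

A sudden drop in a route's hit ratio, or a burst of misses tied to unusual header combinations, is a signal worth investigating.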

Hapi-Specific Remediation

Remediating cache poisoning in Hapi applications requires a defense-in-depth approach that validates inputs, normalizes cache keys, and implements proper content negotiation. The most effective solutions leverage Hapi's built-in validation capabilities and caching plugins.

The first line of defense is strict header validation using Hapi's joi validation framework. Here's a secure implementation for content negotiation:

const Joi = require('joi'); // published as '@hapi/joi' in older releases

// Define allowed content types and validation schema
const allowedContentTypes = ['application/json', 'application/xml'];
const acceptSchema = Joi.string().valid(...allowedContentTypes);

const secureRoutes = [
  {
    method: 'GET',
    path: '/api/user/{id}',
    options: {
      validate: {
        params: Joi.object({
          id: Joi.string().alphanum().min(3).max(30).required()
        }),
        headers: Joi.object({
          accept: acceptSchema.default('application/json')
        }).unknown(true) // allow the other standard headers every client sends
      }
    },
    handler: async (request, h) => {
      const userId = request.params.id;
      const acceptHeader = request.headers.accept;
      
      // Normalize cache key - remove parameters, standardize format
      const normalizedAccept = acceptHeader.split(';')[0].trim().toLowerCase();
      const cacheKey = `user-${userId}-${normalizedAccept}`;
      
      // Secure cache access with TTL
      let cachedResponse = await cache.get(cacheKey);
      if (!cachedResponse) {
        const user = await db.getUser(userId);
        cachedResponse = {
          data: user,
          contentType: normalizedAccept
        };
        await cache.set(cacheKey, cachedResponse, 300); // 5-minute TTL, keyed only on validated, normalized values
      }
      
      return h.response(cachedResponse.data)
        .type(cachedResponse.contentType)
        .header('Cache-Control', 'public, max-age=300, must-revalidate')
        .header('Vary', 'Accept');
    }
  }
];

For applications using Hapi's Catbox caching layer, implement cache key normalization through custom generators:

const Catbox = require('@hapi/catbox');
const Memory = require('@hapi/catbox-memory');

// Create cache with custom key normalization
// (call await cache.start() before first use)
const cache = new Catbox.Client(Memory);

const cacheKeyGenerator = (request) => {
  const normalizedPath = request.path.replace(/[^a-zA-Z0-9/-]/g, '-');
  const normalizedAccept = (request.headers.accept || 'application/json')
    .split(';')[0]
    .trim()
    .toLowerCase();
  
  // Create deterministic cache key
  return `cache:${normalizedPath}:${normalizedAccept}:${request.method}`;
};
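As a quick check that the generator behaves as intended, the snippet below (repeating the generator so the example runs standalone) confirms that attacker-supplied Accept parameters collapse into the same deterministic key:

```javascript
// Usage sketch: the normalizing generator collapses Accept-header variants
// into one deterministic key (repeated here so the example runs standalone)
const cacheKeyGenerator = (request) => {
  const normalizedPath = request.path.replace(/[^a-zA-Z0-9/-]/g, '-');
  const normalizedAccept = (request.headers.accept || 'application/json')
    .split(';')[0]
    .trim()
    .toLowerCase();
  return `cache:${normalizedPath}:${normalizedAccept}:${request.method}`;
};

const clean = { path: '/api/data', method: 'get', headers: { accept: 'application/json' } };
const crafted = { path: '/api/data', method: 'get', headers: { accept: 'application/json; cache-poison=true' } };

console.log(cacheKeyGenerator(clean) === cacheKeyGenerator(crafted)); // true
```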

// Secure route with proper caching
const secureRoutes = [
  {
    method: 'GET',
    path: '/api/data',
    options: {
      handler: async (request, h) => {
        const data = await generateSecureData();
        
        // Use Hapi's built-in response caching
        return h.response(data)
          .type('application/json')
          .ttl(300 * 1000) // ttl() takes milliseconds and requires a cache config on the route
          .header('Vary', 'Accept-Encoding')
          .header('Cache-Control', 'public, max-age=300, must-revalidate');
      }
    }
  }
];

For applications requiring content negotiation with multiple formats, implement a whitelist approach:

const contentNegotiation = (request, availableFormats) => {
  const acceptHeader = request.headers.accept || 'application/json';
  
  // Parse the Accept header into bare media types (q-values ignored for simplicity)
  const acceptedTypes = acceptHeader
    .split(',')
    .map(type => type.split(';')[0].trim().toLowerCase());
  
  // Find first match in whitelist
  const matchedType = acceptedTypes.find(type => 
    availableFormats.includes(type)
  ) || 'application/json';
  
  return matchedType;
};

// Usage in route handler
const routes = [
  {
    method: 'GET',
    path: '/api/resource/{id}',
    handler: async (request, h) => {
      const resource = await db.getResource(request.params.id);
      const bestFormat = contentNegotiation(request, [
        'application/json', 'application/xml', 'text/html'
      ]);
      
      // Deterministic cache key derived from the whitelisted format
      // (pass this to your cache layer; the lookup is omitted for brevity)
      const cacheKey = `resource-${request.params.id}-${bestFormat}`;
      
      return h.response(resource)
        .type(bestFormat)
        .ttl(300 * 1000) // milliseconds
        .header('Vary', 'Accept');
    }
  }
];

Finally, implement comprehensive monitoring and alerting for cache poisoning attempts:

const monitorCacheAttempts = async (request) => {
  const suspiciousHeaders = [
    'accept', 'x-custom-header', 'user-agent'
  ];
  
  const suspiciousValues = suspiciousHeaders
    .map(header => ({ 
      header: header, 
      value: request.headers[header],
      length: (request.headers[header] || '').length
    }))
    .filter(entry => entry.length > 100); // Arbitrary threshold
  
  if (suspiciousValues.length > 0) {
    // Log and alert on potential cache poisoning
    console.warn('Potential cache poisoning attempt', {
      ip: request.info.remoteAddress,
      path: request.path,
      suspiciousHeaders: suspiciousValues
    });
    
    // Optionally block or rate limit (assumes a rate-limiter client is available)
    await rateLimiter.limit(request.info.remoteAddress);
  }
};

Frequently Asked Questions

How does cache poisoning differ from other Hapi security vulnerabilities?
Cache poisoning specifically targets shared caching mechanisms by manipulating request headers to create poisoned cache entries that affect all users. Unlike injection vulnerabilities that execute malicious code, cache poisoning stores incorrect or malicious data that is served to legitimate users. In Hapi, this often exploits the framework's flexible header handling and content negotiation features, making it distinct from issues like XSS, SQL injection, or authentication bypass.
Can middleBrick detect cache poisoning in Hapi applications?