Severity: HIGH

Cache Poisoning in Hapi with Firestore

How This Specific Combination Creates or Exposes the Vulnerability

Cache poisoning in a Hapi service that uses Cloud Firestore typically occurs when upstream or shared cache keys incorporate attacker-influenced input without strict validation or isolation. Because Firestore documents are often cached by key (for example, using a composite key derived from request parameters), an attacker can manipulate those inputs to cause the service to store or retrieve unintended documents, effectively poisoning the cache for other users.

Consider a Hapi route that caches a Firestore document read by document ID. If the document ID comes directly from query parameters and is used as part of the cache key without normalization or strict allowlisting, an attacker can request IDs that map to sensitive documents. Even if Firestore security rules prevent direct access, the cache layer might return a previously fetched document that was cached under a manipulated key, exposing data that should not be visible.
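A minimal sketch of the failure mode (the key-builder name is hypothetical): a handler that "sanitizes" the raw ID by stripping characters, instead of rejecting bad input, lets distinct attacker inputs collapse onto the same cache key.

```javascript
// Hypothetical vulnerable key builder: it "sanitizes" the raw ID by
// stripping disallowed characters instead of rejecting the request,
// so distinct inputs collapse onto the same cache key.
function unsafeCacheKey(rawId) {
  return `profile:${rawId.toLowerCase().replace(/[^a-z0-9_-]/g, '')}`;
}

// 'ADMIN' and 'ad/min' both normalize to 'profile:admin' — an
// attacker-chosen input now shares a cache entry with a legitimate ID.
unsafeCacheKey('ADMIN');  // 'profile:admin'
unsafeCacheKey('ad/min'); // 'profile:admin'
```

Because the cache layer sees one key for many inputs, whichever document was fetched first is served to every later requester whose input collapses to that key.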

Another scenario involves caching query results where pagination or filter values are reflected in the cache key. An attacker can vary these parameters to induce cache entries that mix public and private data. For instance, a route that caches users/{userId}/profile might internally use a Firestore query keyed by userId. If userId values are not strictly validated and isolated, the cache can return a profile for userId A when the attacker requests userId B, because the cache key was derived from tainted input and reused across users.

Because Firestore does not inherently manage application-level caching, the responsibility for safe key construction and cache isolation falls on the service. Hapi provides caching plugins and request context, but if the developer does not sanitize and scope cache keys to the authenticated context or tenant, the shared cache becomes a channel for information leakage across users or roles. This is a cache poisoning vector enabled by the interaction between Hapi’s routing and caching mechanisms and Firestore’s document-based data model.

Firestore-Specific Remediation in Hapi — concrete code fixes

To mitigate cache poisoning in Hapi when working with Firestore, enforce strict input validation, isolate cache keys by tenant or user context, and avoid using raw attacker input as part of cache identifiers. The following patterns demonstrate secure approaches with concrete Firestore examples for Hapi.

Validate and normalize identifiers

Always validate IDs against an allowlist or strict pattern before using them in Firestore lookups or cache keys. Use a canonical representation (e.g., lowercase, trimmed) to avoid bypasses via encoding differences.

// Hapi route handler excerpt: validate the raw ID, then normalize it.
// Rejecting invalid input (rather than silently stripping characters)
// prevents distinct inputs from collapsing onto the same document ID.
const rawId = request.params.id;
if (!/^[A-Za-z0-9_-]{1,100}$/.test(rawId)) {
  throw Boom.badRequest('Invalid document identifier');
}
const documentId = rawId.toLowerCase();

const docRef = firestore.collection('profiles').doc(documentId);
const doc = await docRef.get();
if (!doc.exists) {
  throw Boom.notFound('Profile not found');
}
return doc.data();

Scope cache keys by user or tenant

Include an authenticated subject (user ID or tenant ID) in the cache key so that cached entries cannot be shared across contexts. This prevents an attacker from reusing or evicting another user’s cached data.

// Hapi route handler excerpt with user-scoped Firestore caching.
// Note: Hapi requests have no built-in `cache` property; `cache` is
// assumed to be a Catbox policy created with server.cache() and
// exposed on server.app during startup.
const cache = request.server.app.cache;
const userId = request.auth.credentials.userId;
const documentId = request.params.id; // validate as shown above before use

const cacheKey = `user:${userId}:profile:${documentId}`;
const cached = await cache.get(cacheKey);
if (cached) {
  return cached;
}

const docRef = firestore.collection('profiles').doc(documentId);
const doc = await docRef.get();
if (!doc.exists) {
  throw Boom.notFound('Profile not found');
}
const data = doc.data();
await cache.set(cacheKey, data, 300 * 1000); // Catbox TTL is in milliseconds
return data;
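The scoping above only holds if each key component is well-formed. A minimal sketch (the helper name is hypothetical) of a key builder that rejects the `:` delimiter, so one component cannot forge another user's prefix:

```javascript
// Sketch of a scoped cache key builder. Rejecting ':' (and any other
// unexpected character) in each component prevents key injection,
// e.g. a crafted document ID that spells out another user's prefix.
function scopedKey(userId, resource, id) {
  for (const part of [userId, resource, id]) {
    if (!/^[A-Za-z0-9_-]{1,128}$/.test(part)) {
      throw new Error('invalid cache key component');
    }
  }
  return `user:${userId}:${resource}:${id}`;
}

scopedKey('u123', 'profile', 'doc-9'); // 'user:u123:profile:doc-9'
```

Without this check, a document ID like `doc:user:u456:profile` would let an attacker read or evict entries that appear to belong to another user's scope.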

Parameterized queries with field-level validation

When caching query results, include only validated, non-attacker-controlled values in the cache key. Avoid directly interpolating request parameters into Firestore queries that form cache identifiers.

// Hapi route handler excerpt with safe Firestore query caching.
// `cache` is assumed to be the same server-level Catbox policy as above.
const category = request.query.category;
const allowedCategories = ['public', 'featured', 'trending'];
if (!allowedCategories.includes(category)) {
  throw Boom.badRequest('Invalid category');
}

// Use a deterministic, sanitized cache key
const cacheKey = `feed:${category}`;
const cachedFeed = await cache.get(cacheKey);
if (cachedFeed) {
  return cachedFeed;
}

const querySnapshot = await firestore.collection('posts')
  .where('category', '==', category)
  .orderBy('publishedAt', 'desc')
  .limit(20)
  .get();

const rows = querySnapshot.docs.map(d => ({ id: d.id, ...d.data() }));
await cache.set(cacheKey, rows, 600 * 1000); // Catbox TTL is in milliseconds
return rows;
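When a key must include several validated parameters (for example pagination alongside category), building it from a fixed-order allowlist keeps the key deterministic regardless of how the client ordered its query string. A sketch, with hypothetical names:

```javascript
// Build a cache key from an allowlisted, fixed-order set of parameters.
// Values should already be validated against an allowlist or pattern;
// encodeURIComponent guards the delimiters used in the key itself.
function buildFeedCacheKey(params, allowed = ['category', 'page']) {
  const parts = allowed
    .filter((name) => params[name] !== undefined)
    .map((name) => `${name}=${encodeURIComponent(String(params[name]))}`);
  return `feed:${parts.join('&')}`;
}

// Same key whether the client sent ?page=2&category=public
// or ?category=public&page=2:
buildFeedCacheKey({ page: 2, category: 'public' }); // 'feed:category=public&page=2'
```

Deriving the key only from the allowlist means unexpected extra parameters are ignored rather than expanding the cache key space.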

Use Firestore security rules as a final control layer

While application-level validation and scoped caching are essential, ensure Firestore rules restrict reads and writes to authorized documents only. Do not rely on rules alone to prevent cache poisoning, but use them to enforce tenant boundaries and ownership checks.

rules_version = '2';
service cloud.firestore {
  match /databases/{database}/documents {
    match /profiles/{profileId} {
      allow read: if request.auth != null && request.auth.uid == profileId;
      allow write: if request.auth != null && request.auth.uid == profileId;
    }
  }
}

Frequently Asked Questions

What is cache poisoning in the context of Hapi and Firestore?
Cache poisoning occurs when attacker-influenced input affects cache keys used by a Hapi service that reads from Firestore, causing the cache to store or return unintended documents and potentially expose data across users.
How can I prevent cache poisoning in Hapi with Firestore?
Prevent cache poisoning by validating and normalizing identifiers, scoping cache keys by authenticated user or tenant, using parameterized queries with sanitized values, and applying Firestore security rules to enforce document-level access controls.