Severity: HIGH — cache-poisoning / koa / firestore

Cache Poisoning in Koa with Firestore

How This Specific Combination Creates or Exposes the Vulnerability

Cache poisoning in a Koa application that uses Cloud Firestore occurs when an attacker causes cached responses to vary by attacker-controlled data, leading to one user seeing another user’s data or seeing modified data. This typically arises when cache keys are derived from user-supplied inputs without normalizing or isolating tenant or user context, and when Firestore queries embed values directly into paths or queries that later become part of the cache key.

Consider a Koa endpoint that serves a user profile by reading from Firestore and caching the result. If the cache key includes only the profile ID supplied by the client, an attacker can supply another user’s ID and potentially receive cached data that was intended for a different user. Firestore security rules do not prevent application-layer caching mistakes; they enforce read/write permissions at the database level. If the Koa layer does not enforce user-context isolation before caching, the cache becomes a vector for data leakage.

A concrete scenario: a route uses a Firestore document path like users/{userId}/profile. If the Koa route uses the raw userId from the query string or header to build a cache key without validating or scoping it to the requesting user, two different users with different permissions may share the same cache entry. In addition, if the response includes user-specific fields (such as email or role) and the cache does not differentiate by authenticated context, an attacker can probe the endpoint to confirm whether usernames or emails exist in Firestore based on cache hit/miss timing or response differences.
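The flawed versus corrected key derivation in this scenario can be sketched as two small helpers (the function names are illustrative, not from any library):

```javascript
// VULNERABLE: the key is derived entirely from attacker-controlled input
// (query string or header), so any client can address — and be served —
// another user's cache entry.
function buildCacheKeyUnsafe(queryUserId) {
  return `profile:${queryUserId}`;
}

// SAFER: the key is bound to the server-verified identity from the session
// or token claims; client-supplied IDs never reach the key.
function buildCacheKeySafe(verifiedUid) {
  if (typeof verifiedUid !== 'string' || verifiedUid.length === 0) {
    throw new Error('missing authenticated user context');
  }
  return `profile:${verifiedUid}`;
}
```

The difference is not the key format but the provenance of the value: the safe variant can only ever name the cache entry of the user the server has authenticated.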

Firestore itself does not introduce cache poisoning, but the way queries and document references are constructed in Koa can create indirect risks. For example, if a query uses a client-supplied field for ordering or filtering without strict validation, and the result is cached, the same cached ordering or filter may be reused for another user’s context. Also, if Firestore writes include dynamic fields driven by unvalidated input, cached read results may later serve poisoned data to other requests. Because Firestore returns consistent results for the same query within a session, the danger is less about query inconsistency and more about failing to segregate caches by tenant or user context in the Koa layer.

To mitigate these risks, ensure cache keys incorporate tenant or user identifiers derived from server-side context (such as session or token claims) rather than raw client input. Avoid using unvalidated IDs or query parameters as part of cache keys, and normalize inputs before constructing Firestore paths. Combine this with strict Firestore security rules that enforce user-level read/write boundaries so that even if a cache is poisoned, the underlying database enforces isolation.
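Normalizing inputs before they reach a Firestore path can be as simple as a strict validator. A sketch — the character set and length limit here are conservative assumptions; adjust them to your ID scheme (Firestore document IDs must not contain `/` and must not be `.` or `..`):

```javascript
// Reject anything that could alter the document path or alias another
// document. Trims whitespace, then enforces a conservative pattern.
function normalizeDocId(raw) {
  const id = String(raw).trim();
  if (!/^[A-Za-z0-9_-]{1,128}$/.test(id)) {
    throw new Error('invalid document ID');
  }
  return id;
}
```

Rejecting rather than sanitizing keeps the mapping between input and document unambiguous, which matters when the same value also feeds a cache key.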

Firestore-Specific Remediation in Koa — concrete code fixes

Remediation focuses on scoping data access to the requesting user and ensuring cache keys reflect authenticated context. In Koa, use middleware to extract verified user identity (e.g., from JWT or session) and incorporate that into both Firestore queries and cache keys. Never rely on client-supplied IDs alone to determine document paths or cache entries.

Example: a Koa route that safely reads a user profile from Firestore using the authenticated user’s UID and uses a user-aware cache key.

const Koa = require('koa');
const { initializeApp } = require('firebase-admin/app');
const { getFirestore } = require('firebase-admin/firestore');
const app = new Koa();

// Initialize Firebase Admin (run once at startup)
initializeApp();
const db = getFirestore();

// Simple in-memory cache for example; use a robust cache in production
const cache = new Map();

// Middleware to attach a verified user (e.g., from JWT)
app.use(async (ctx, next) => {
  // In practice, verify token and attach user; this is a stub
  ctx.state.user = { uid: 'verified-uid-from-token' };
  await next();
});

app.use(async (ctx) => {
  if (ctx.path === '/profile') {
    const requestingUser = ctx.state.user;
    if (!requestingUser || !requestingUser.uid) {
      ctx.status = 401;
      ctx.body = { error: 'Unauthorized' };
      return;
    }

    // Build cache key using server-side UID, not client input
    const cacheKey = `profile:${requestingUser.uid}`;
    if (cache.has(cacheKey)) {
      ctx.body = cache.get(cacheKey);
      return;
    }

    // Read from Firestore using server-side UID
    const docRef = db.collection('users').doc(requestingUser.uid).collection('profile').doc('current');
    const doc = await docRef.get();

    if (!doc.exists) {
      ctx.status = 404;
      ctx.body = { error: 'Not found' };
      return;
    }

    const data = doc.data();
    // Do not cache sensitive or high-risk fields unnecessarily
    const safeData = { displayName: data.displayName, avatarUrl: data.avatarUrl };
    cache.set(cacheKey, safeData);
    ctx.body = safeData;
  }
});

app.listen(3000);

This approach ensures the Firestore document path includes only server-controlled identifiers and that the cache key is bound to the authenticated user. Even if an attacker manipulates query parameters, they cannot cause one user’s cache entry to be served to another user.
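The stub authentication middleware above can be replaced with real token verification. One testable shape is to inject the verifier as a dependency; in production you would pass a function wrapping `getAuth().verifyIdToken` from `firebase-admin/auth` (the factory name below is an assumption of this sketch):

```javascript
// Koa-style auth middleware with the token verifier injected, so the
// cache-key logic downstream always sees a server-verified UID.
function createAuthMiddleware(verifyToken) {
  return async (ctx, next) => {
    const header = ctx.headers.authorization || '';
    const match = /^Bearer (.+)$/.exec(header);
    if (!match) {
      ctx.status = 401;
      ctx.body = { error: 'Missing bearer token' };
      return;
    }
    try {
      // In production: verifyToken = (t) => getAuth().verifyIdToken(t)
      const claims = await verifyToken(match[1]);
      ctx.state.user = { uid: claims.uid };
    } catch (err) {
      ctx.status = 401;
      ctx.body = { error: 'Invalid token' };
      return;
    }
    await next();
  };
}
```

Downstream handlers then read `ctx.state.user.uid` exactly as in the profile route above, and never a client-supplied ID.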

Additional remediation steps include normalizing inputs before using them in Firestore paths, applying strict Firestore security rules that scope reads to the requesting user, and avoiding inclusion of sensitive or high-risk fields in cache entries. For production, replace the in-memory cache with a distributed cache and set appropriate TTLs to limit staleness while preserving isolation.
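The TTL discipline can be illustrated with a minimal in-memory sketch; in production a distributed cache (for example, Redis with per-key expiry) plays this role, and the user-scoped key discipline stays the same:

```javascript
// Minimal TTL-aware cache to bound staleness. Entries past their expiry
// are treated as misses and evicted on access.
class TtlCache {
  constructor(ttlMs) {
    this.ttlMs = ttlMs;
    this.entries = new Map(); // key -> { value, expiresAt }
  }
  get(key, now = Date.now()) {
    const entry = this.entries.get(key);
    if (!entry) return undefined;
    if (entry.expiresAt <= now) {
      this.entries.delete(key); // expired: evict and report a miss
      return undefined;
    }
    return entry.value;
  }
  set(key, value, now = Date.now()) {
    this.entries.set(key, { value, expiresAt: now + this.ttlMs });
  }
}
```

The `now` parameter exists only to make expiry testable; callers in the route would use the defaults (`cache.get(cacheKey)`, `cache.set(cacheKey, safeData)`).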

Frequently Asked Questions

Can Firestore security rules alone prevent cache poisoning in Koa?
No. Firestore security rules enforce database-level access controls, but they do not prevent application-layer caching mistakes. If Koa constructs cache keys using attacker-influenced values without scoping to the authenticated user, poisoned cache entries can be served. You must enforce user context in Koa before caching, and use rules as a secondary boundary.
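As that secondary boundary, rules can scope profile reads and writes to the owner. A sketch matching the document layout used in the earlier example (`users/{userId}/profile/...`); adjust collection names to your schema:

```
rules_version = '2';
service cloud.firestore {
  match /databases/{database}/documents {
    // Only the authenticated owner may read or write their profile docs.
    match /users/{userId}/profile/{doc} {
      allow read, write: if request.auth != null
                         && request.auth.uid == userId;
    }
  }
}
```

Note that when Koa talks to Firestore through the Admin SDK, these rules are bypassed; they protect direct client access, which is why the application layer must still enforce user context before caching.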
What is a safe way to construct Firestore document paths in a Koa app to reduce cache poisoning risk?
Use server-side identifiers (e.g., verified UID from authentication) to build document paths and cache keys. Avoid using raw client-supplied IDs or query parameters. Example: db.collection('users').doc(verifiedUid).collection('profile').doc('current'). Combine this with user-bound cache keys and strict Firestore rules.