
Cache Poisoning in Rocket with Firestore

Cache Poisoning in Rocket with Firestore — how this specific combination creates or exposes the vulnerability

Cache poisoning in the context of a Rocket application backed by Google Firestore occurs when an attacker causes cached representations of a resource to store malicious or incorrect data. Because Rocket does not inherently cache Firestore query results, developers often introduce caching at the application layer (for example via a Redis or in-memory cache) to reduce read load and latency. If the cache key does not incorporate all user-supplied or request-derived inputs, or if the cached data is served without revalidating authorization context, the cache may return data intended for one user to another user or store attacker-controlled data for subsequent requests.

Consider a Rocket handler that accepts a user identifier via a query parameter to fetch Firestore documents:

use rocket::get;
use rocket::serde::json::Json;

#[get("/profile?<user_id>")]
async fn profile(user_id: Option<&str>) -> Json<serde_json::Value> {
    let user_id = user_id.unwrap_or("me");
    // Imagine this calls Firestore via a client and caches the result,
    // keyed only by user_id with no authorization context.
    let profile = match CACHE.get(user_id) {
        Some(cached) => cached,
        None => fetch_firestore_profile(user_id).await,
    };
    CACHE.insert(user_id.to_string(), profile.clone());
    Json(profile)
}

If the cache key is simply user_id, an attacker could supply another user’s ID and potentially observe another user’s profile if authorization checks are applied after cache retrieval or are absent. Additionally, if the Firestore document itself contains user-controlled fields that are stored back into the cache (for example via an admin or synchronization endpoint), the poisoned cache may later serve attacker-modified data to other users, leading to information leakage or incorrect behavior. This pattern becomes more impactful when combined with Firestore’s real-time listeners: a cached snapshot may be considered fresh even after permissions change, because the cache does not re-validate Firestore security rules on each request.

Another vector involves query parameters that affect Firestore query filters or ordering. If a request includes parameters that modify which documents are returned and those results are cached without canonicalizing the parameters into the cache key, two different requests can map to the same cache entry. Subsequent requests for the canonical key may receive poisoned results that were generated under a different parameter set, potentially exposing private documents or injecting malicious content formatted as code or links.

Because middleBrick tests unauthenticated attack surfaces and checks for issues such as BOLA/IDOR and Data Exposure, findings may highlight endpoints where cache keys are insufficiently scoped or where sensitive Firestore data is returned without adequate authorization checks. Remediation focuses on ensuring that cache keys incorporate user context and authorization scope, and that cached data is never served without revalidating permissions against Firestore rules on each request.

Firestore-Specific Remediation in Rocket — concrete code fixes

To mitigate cache poisoning when using Rocket with Firestore, ensure that cache keys incorporate the full request context, including user identity, authorization roles, and all parameters that affect the query or document representation. Avoid caching sensitive data unless necessary, and when caching is required, enforce strict revalidation against Firestore security rules before serving cached responses.

First, include a user or tenant identifier and a representation of authorization scope in the cache key. If you are using a per-user cache, incorporate the authenticated user’s ID or subject claim. For cases where multiple roles or tenants can access the same document, embed role or tenant context:

use std::collections::hash_map::DefaultHasher;
use std::hash::{Hash, Hasher};

fn cache_key(user_id: &str, roles: &[String], query_params: &serde_json::Value) -> u64 {
    let mut hasher = DefaultHasher::new();
    user_id.hash(&mut hasher);
    roles.hash(&mut hasher);
    // serde_json::Value does not implement Hash, so hash its serialized form.
    // Without the preserve_order feature, serde_json backs objects with a
    // BTreeMap, so serialization is canonical with respect to key order.
    query_params.to_string().hash(&mut hasher);
    hasher.finish()
}

Use this key when interacting with your caching layer. In Rocket, you can integrate this with request guards to ensure the user context is available before caching:

use rocket::http::Status;
use rocket::request::{FromRequest, Outcome, Request};

struct AuthenticatedUser {
    id: String,
    roles: Vec<String>,
}

#[rocket::async_trait]
impl<'r> FromRequest<'r> for AuthenticatedUser {
    type Error = ();

    async fn from_request(request: &'r Request<'_>) -> Outcome<Self, Self::Error> {
        // Extract the token, validate it, and populate the user and roles
        // (simplified example).
        if let Some(_token) = request.headers().get_one("authorization") {
            // Validate the token and fetch user info here.
            return Outcome::Success(AuthenticatedUser {
                id: "user-123".to_string(),
                roles: vec!["user".to_string()],
            });
        }
        // Rocket 0.5 uses Outcome::Error for failed guard outcomes.
        Outcome::Error((Status::Unauthorized, ()))
    }
}

With this guard, your handler can construct a robust cache key and avoid leaking data across users or roles:

use rocket::get;
use rocket::serde::json::Json;
use rocket::State;

#[get("/profile?<document_id>")]
async fn profile(
    user: AuthenticatedUser,
    document_id: Option<&str>,
    firestore_client: &State<FirestoreClient>,
) -> Json<serde_json::Value> {
    let document_id = document_id.unwrap_or("default");
    let params = serde_json::json!({ "document_id": document_id });
    let key = cache_key(&user.id, &user.roles, &params);

    if let Some(cached) = CACHE.get(&key) {
        return Json(cached);
    }

    // Fetch with explicit user-based scoping in the Firestore query or document path
    let doc = fetch_firestore_document(firestore_client.inner(), &user.roles, document_id).await;
    CACHE.insert(key, doc.clone());
    Json(doc)
}
Second, enforce Firestore security rules on every request and avoid trusting cached data for authorization decisions. Do not skip server-side checks because data was previously cached. For Firestore operations, pass the user’s roles or UID into the query so rules can apply field-level and document-level filters:

async fn fetch_firestore_document(
    client: &FirestoreClient,
    user_roles: &[String],
    document_id: &str,
) -> serde_json::Value {
    // Build a read that respects user roles, e.g., filtering by allowed_departments.
    // Apply role-based constraints in application logic before issuing the read;
    // Firestore security rules still enforce permissions at read time.
    let _ = user_roles; // role-based filtering elided in this sketch
    let doc_ref = client.collection("profiles").doc(document_id);
    let snapshot = doc_ref.get().await.unwrap_or_default();
    snapshot.data().unwrap_or_default()
}

Third, if you cache query results, ensure the cache is invalidated or updated when data changes. Use Firestore triggers or change streams to purge or update affected cache entries. Avoid caching sensitive or high-risk fields unless encrypted and tightly scoped.

By combining precise cache keys, per-request authorization revalidation, and disciplined Firestore rule usage, you reduce the risk of cache poisoning while still benefiting from reduced read latency. middleBrick can help identify endpoints where cache keys omit authorization context or where sensitive Firestore data is exposed without sufficient checks.

Frequently Asked Questions

Does caching Firestore query responses in Rocket increase security risk?
Yes, if cache keys do not include user and authorization context, cached data can be reused across users, leading to IDOR or information exposure. Always include user identity, roles, and query parameters in the cache key and revalidate permissions on each request.
Can middleBrick detect cache poisoning issues with Firestore-backed Rocket APIs?
middleBrick tests unauthenticated attack surfaces and performs checks such as Data Exposure and BOLA/IDOR that can surface endpoints where cache keys are insufficiently scoped or sensitive data is returned without proper authorization checks.