Race Conditions in Firestore
How Race Conditions Manifest in Firestore
Race conditions in Firestore occur when multiple operations attempt to modify the same data simultaneously, leading to inconsistent or unexpected results. Although Firestore's individual reads and writes are strongly consistent, nothing serializes independent clients' read-modify-write sequences, and that gap creates race condition scenarios developers must understand and mitigate.
The most common Firestore race condition is the lost update: two clients read a document, compute new values from what they read, and write back, with the later write silently discarding the earlier one. Consider a points-tracking scenario:

```javascript
// Vulnerable: read-modify-write without a transaction
async function addPoints(userId, pointsToAdd) {
  const userRef = db.collection('users').doc(userId);
  const snapshot = await userRef.get();
  await userRef.update({ points: snapshot.data().points + pointsToAdd });
}

// Two concurrent requests:
addPoints('user123', 100);
addPoints('user123', 50);
```

If both requests read the document before either write lands, each computes its new total from the same stale value and the second write overwrites the first, so only one of the two increments survives. This occurs because Firestore processes the writes as independent operations with no awareness of the concurrent read-modify-write.
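The lost-update interleaving is easy to reproduce with plain async JavaScript, no Firestore required. A minimal simulation, with a local variable standing in for the document field:

```javascript
// Simulation of the lost-update race: two "clients" read the same
// counter, pause (letting the other interleave), then write back
// read + delta -- the same shape as a Firestore read-modify-write
// performed outside a transaction.
let points = 0; // stands in for the Firestore document field

const yieldTurn = () => new Promise((resolve) => setImmediate(resolve));

async function naiveAddPoints(delta) {
  const snapshot = points;    // the "get()"
  await yieldTurn();          // the other client runs here
  points = snapshot + delta;  // the "update()" clobbers the other write
}

async function demo() {
  await Promise.all([naiveAddPoints(100), naiveAddPoints(50)]);
  return points;
}
```

Running demo() yields 100 or 50 depending on scheduling, never the correct 150.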
Firestore's transaction mechanism provides atomicity but introduces its own contention patterns. Firestore does not support nested transactions, and concurrent transactions that touch the same documents conflict: server SDKs block while waiting on document locks, while mobile and web SDKs abort and retry:
```javascript
// Transaction contention scenario
async function transferPoints(fromId, toId, amount) {
  const fromRef = db.collection('users').doc(fromId);
  const toRef = db.collection('users').doc(toId);

  await db.runTransaction(async (transaction) => {
    // All reads must happen before any writes in a Firestore transaction
    const fromDoc = await transaction.get(fromRef);
    const toDoc = await transaction.get(toRef);

    if (fromDoc.data().points >= amount) {
      transaction.update(fromRef, { points: fromDoc.data().points - amount });
      transaction.update(toRef, { points: toDoc.data().points + amount });
    }
  });
}
```
When multiple transferPoints calls execute concurrently for the same users, Firestore retries the conflicting transactions, causing delays or, under sustained contention, failures. More critically, if terminal failures aren't handled once the SDK exhausts its automatic retries, users might see transfers silently dropped during the contention window.
Firestore's batched writes introduce another pitfall. When a batch contains multiple operations on the same document, intermediate states are never observable and later operations override earlier ones:

```javascript
// Batched write pitfall: the 'pending' state is never visible
const batch = db.batch();
batch.update(docRef, { status: 'pending' });
batch.update(docRef, { status: 'completed' });
await batch.commit();
```

Because the batch commits atomically, the document jumps straight to its final state; any listener or downstream logic that expects to observe the intermediate 'pending' status will never see it.
Collection-level operations also suffer from race conditions. Concurrent document creations with the same ID, or simultaneous deletions and updates, can produce unpredictable results:
```javascript
// Collection race condition
const collectionRef = db.collection('notifications');

// Two clients creating notifications with the same ID
collectionRef.doc('alert_123').set({ message: 'First' });
collectionRef.doc('alert_123').set({ message: 'Second' });
```
The winner of this race depends on network latency and Firestore's internal scheduling, making the outcome non-deterministic.
Firestore-Specific Detection
Detecting race conditions in Firestore requires both static analysis and runtime monitoring. Static analysis tools can identify problematic patterns in your codebase, while runtime monitoring catches actual race condition occurrences.
middleBrick's Firestore-specific scanning analyzes your API endpoints for race condition vulnerabilities. The scanner examines:
- Concurrent update patterns without proper synchronization
- Transaction usage that might lead to deadlocks
- Batched writes affecting the same documents
- Collection operations without proper conflict resolution
middleBrick's active testing simulates concurrent requests to your Firestore-backed endpoints, measuring response consistency and identifying potential race windows. The scanner provides a security score (A-F) with specific findings about race condition risks.
Runtime monitoring using Firestore's built-in tools helps detect active race conditions. Cloud Monitoring can track:
```javascript
// Illustrative metric identifiers -- map these to the actual Firestore
// metrics your project exposes in Cloud Monitoring before relying on them
const raceConditionMetrics = {
  transactionRetries: 'firestore/transaction_retries',
  writeConflicts: 'firestore/write_conflicts',
  concurrentOperations: 'firestore/concurrent_operations'
};
```
High transaction retry rates or write conflict counts indicate potential race condition hotspots. middleBrick's dashboard aggregates these metrics across your APIs, showing trends and severity levels.
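A simple way to turn those metrics into alerts is a retry-rate threshold per collection. The sketch below assumes samples have already been exported from Cloud Monitoring into plain objects; the field names and the 5% threshold are illustrative:

```javascript
// Flag collections whose transaction retry rate suggests contention.
// Each sample is assumed to look like:
//   { collection: 'users', retries: 12, commits: 100 }
function findHotspots(samples, maxRetryRate = 0.05) {
  return samples
    .filter((s) => s.retries / Math.max(s.commits, 1) > maxRetryRate)
    .map((s) => s.collection);
}
```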
Code analysis for race condition patterns should focus on these Firestore-specific indicators:
| Pattern | Risk Level | Detection Method |
|---|---|---|
| Multiple concurrent updates without transactions | High | Static code analysis |
| Nested transactions on related documents | High | Code review + runtime monitoring |
| Batched writes affecting same document | Medium | Static analysis |
| Collection operations without conflict resolution | Medium | Runtime testing |
middleBrick's CLI tool enables local race condition testing during development:
```bash
# Scan your Firestore API for race conditions
middlebrick scan https://api.example.com/users --firestore

# Get detailed findings in JSON format
middlebrick scan https://api.example.com/transactions --output json
```
The scanner identifies specific vulnerable endpoints and provides remediation guidance tailored to Firestore's capabilities.
Firestore-Specific Remediation
Firestore provides several mechanisms to prevent and mitigate race conditions. The most effective approach depends on your specific use case and consistency requirements.
Atomic field updates eliminate many race condition scenarios by ensuring operations execute as single, indivisible units:
```javascript
// Atomic increment prevents race conditions
async function addPoints(userId, pointsToAdd) {
  const userRef = db.collection('users').doc(userId);
  await userRef.update({
    points: firebase.firestore.FieldValue.increment(pointsToAdd)
  });
}

// Concurrent calls now work correctly
addPoints('user123', 100);
addPoints('user123', 50);
// Result: 150 points added atomically
```
The FieldValue.increment() method ensures all concurrent increments are applied correctly, regardless of execution order.
Transactions provide stronger consistency guarantees for complex operations:
```javascript
// Proper transaction with failure handling
async function safeTransfer(fromId, toId, amount) {
  const fromRef = db.collection('users').doc(fromId);
  const toRef = db.collection('users').doc(toId);

  try {
    await db.runTransaction(async (transaction) => {
      // Reads first, then writes, as Firestore transactions require
      const fromDoc = await transaction.get(fromRef);
      const toDoc = await transaction.get(toRef);
      const fromData = fromDoc.data();
      const toData = toDoc.data();

      if (fromData.points >= amount) {
        transaction.update(fromRef, {
          points: fromData.points - amount,
          lastModified: firebase.firestore.Timestamp.now()
        });
        transaction.update(toRef, {
          points: toData.points + amount,
          lastModified: firebase.firestore.Timestamp.now()
        });
      } else {
        throw new Error('Insufficient funds');
      }
    });
  } catch (error) {
    console.error('Transaction failed:', error);
    // Caller can retry with exponential backoff
    return false;
  }
  return true;
}
```
The Firestore SDK automatically retries transactions that conflict with concurrent operations; for failures that survive those automatic retries, add application-level retry with exponential backoff rather than surfacing the error immediately.
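An application-level wrapper for such terminal failures might look like the following sketch: plain JavaScript with full jitter on the delay. The operation argument would be the transaction call (for example, () => safeTransfer(fromId, toId, amount)); the attempt and delay defaults are assumptions to tune:

```javascript
// Generic retry with exponential backoff and full jitter.
async function withRetry(operation, { maxAttempts = 5, baseDelayMs = 100 } = {}) {
  let lastError;
  for (let attempt = 0; attempt < maxAttempts; attempt++) {
    try {
      return await operation();
    } catch (error) {
      lastError = error;
      // Random delay in [0, baseDelayMs * 2^attempt) spreads out
      // competing clients so they stop colliding in lockstep.
      const delayMs = Math.random() * baseDelayMs * 2 ** attempt;
      await new Promise((resolve) => setTimeout(resolve, delayMs));
    }
  }
  throw lastError; // exhausted all attempts
}
```

The jitter matters: without it, clients that conflicted once tend to retry at the same instants and conflict again.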
Document-level locking using a lock flag prevents concurrent modifications, but the lock itself must be acquired atomically: checking the flag with get() and then setting it in a separate write is its own race. Acquiring the lock inside a transaction closes that window:

```javascript
// Document lock acquired atomically inside a transaction
async function updateWithLock(docId, updates) {
  const docRef = db.collection('documents').doc(docId);

  // Retry loop with exponential backoff for lock acquisition
  for (let attempt = 0; attempt < 3; attempt++) {
    const acquired = await db.runTransaction(async (transaction) => {
      const doc = await transaction.get(docRef);
      if (doc.data().locked) {
        return false; // another client holds the lock
      }
      transaction.update(docRef, {
        locked: true,
        lastLockTime: firebase.firestore.Timestamp.now()
      });
      return true;
    });

    if (!acquired) {
      // Back off before trying again
      await new Promise((resolve) => setTimeout(resolve, 100 * Math.pow(2, attempt)));
      continue;
    }

    try {
      // Perform updates while holding the lock
      await docRef.update(updates);
      return true;
    } finally {
      // Release the lock on success or error
      await docRef.update({ locked: false });
    }
  }
  return false; // Failed to acquire lock
}
```
This pattern ensures only one client can modify a document at a time, though it may increase latency for concurrent operations.
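One caveat: if a client crashes between acquiring and releasing the lock, the document stays locked forever. A common fix is to treat locks older than a time-to-live as stale and reclaimable. The helper below is a pure-JavaScript sketch: it compares plain epoch milliseconds, whereas with Firestore you would use lastLockTime.toMillis(), and the 30-second TTL is an assumption:

```javascript
// Decide whether a document's lock should still be honored.
const LOCK_TTL_MS = 30 * 1000; // assumed upper bound on one update

function isLockHeld(doc, nowMs, ttlMs = LOCK_TTL_MS) {
  if (!doc.locked) return false;
  // A lock past its TTL is presumed abandoned and may be reclaimed.
  return nowMs - doc.lastLockTimeMs < ttlMs;
}
```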
Server-side timestamp ordering provides a simple race condition mitigation for certain scenarios:
```javascript
// Server timestamp for ordering
async function createNotification(userId, message) {
  const notificationRef = db.collection('notifications').doc();
  await notificationRef.set({
    userId: userId,
    message: message,
    createdAt: firebase.firestore.FieldValue.serverTimestamp(),
    processed: false
  });
}
```
Using server timestamps ensures consistent ordering across all clients, preventing client clock skew from affecting race condition outcomes.
For batch operations, ensure no document appears multiple times in the same batch:
```javascript
// Safe batch write: at most one operation per document
async function safeBatchUpdate(updates) {
  const batch = db.batch();
  const docSet = new Set();

  updates.forEach((update) => {
    // Later updates to an already-seen document are skipped (first wins)
    if (!docSet.has(update.docId)) {
      const docRef = db.collection('users').doc(update.docId);
      batch.update(docRef, update.fields);
      docSet.add(update.docId);
    }
  });

  await batch.commit();
}
```
This prevents the non-deterministic behavior that occurs when multiple operations target the same document in a single batch.
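Note that dropping duplicates keeps only the first update for each document. When every intended change must survive, an alternative sketch is to coalesce the field maps before batching, producing one write per document with later fields winning per key (a choice of this helper, not Firestore behavior):

```javascript
// Merge all updates targeting the same document into a single field map.
function coalesceUpdates(updates) {
  const merged = new Map();
  for (const { docId, fields } of updates) {
    merged.set(docId, { ...(merged.get(docId) || {}), ...fields });
  }
  return [...merged.entries()].map(([docId, fields]) => ({ docId, fields }));
}
```

Feeding the coalesced list to safeBatchUpdate then preserves every field change while still issuing at most one operation per document.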